diff --git a/pr-59/GettingStarted/QuickStartDemo/index.html b/pr-59/GettingStarted/QuickStartDemo/index.html index 54d7bef17..0a4df9611 100644 --- a/pr-59/GettingStarted/QuickStartDemo/index.html +++ b/pr-59/GettingStarted/QuickStartDemo/index.html @@ -2731,13 +2731,14 @@

Running the AWSIM demo -
    -
  1. Launch awsim_labs.x86_64. -
    ./<path to AWSIM folder>/awsim_labs.x86_64
    -

    It may take some time for the application to start the so please wait until image similar to the one presented below is visible in your application window.

    -

    -
  2. -
+

4. Launch awsim_labs.x86_64. +

./<path to AWSIM folder>/awsim_labs.x86_64
+
+
  It may take some time for the application to start, so please wait until an image similar to the one presented below is visible in your application window.
+
+  ![](Image_0.png)
+
+

Launching Autoware#

diff --git a/pr-59/search/search_index.json b/pr-59/search/search_index.json index 03cbc89d0..a278fbeb2 100644 --- a/pr-59/search/search_index.json +++ b/pr-59/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Home","text":""},{"location":"#welcome-to-awsim-labs","title":"Welcome to AWSIM Labs","text":"

AWSIM Labs is currently being developed under the Autoware Labs initiative. The main purpose of this fork is to deliver features requested by AWSIM users faster, while also ensuring a high-performance simulation environment for Autoware.

This is a fork of TIER IV's AWSIM.

"},{"location":"#features","title":"Features","text":""},{"location":"#feature-differences-from-the-main-awsim","title":"Feature differences from the main AWSIM","text":"AWSIM AWSIM Labs Using HDRP Using URP Using Unity 2021.1.7f1 Using Unity LTS 2022.3.21f1 Limited interaction with simulation and UI Interactable simulation and UI Uses more resources Uses less resources - Multiple scene and vehicle setup"},{"location":"#try-the-simulation-demo-yourself","title":"Try the simulation demo yourself!","text":"

We don't have a release yet. Please build it from source.

Download AWSIM Demo for Ubuntu

To test the AWSIM Labs demo with Autoware please refer to the Quick start demo section.

"},{"location":"Components/Clock/ClockPublisher/","title":"Clock Publisher","text":""},{"location":"Components/Clock/ClockPublisher/#introduction","title":"Introduction","text":"

ClockPublisher allows the publication of the simulation time from the clock operating within AWSIM. The current time is retrieved from a TimeSource object via the SimulatorROS2Node. AWSIM provides a convenient method for selecting the appropriate time source type, as well as the flexibility to implement custom TimeSources tailored to specific user requirements.

"},{"location":"Components/Clock/ClockPublisher/#setup","title":"Setup","text":"

To enable the publication of the current time during simulation execution, ClockPublisher must be included as a component within the scene. Moreover, to allow the TimeSource to be set or changed, the TimeSourceSelector object must also be present in the active scene.

"},{"location":"Components/Clock/ClockPublisher/#selecting-time-source","title":"Selecting Time Source","text":"

The desired TimeSource can be selected in two ways:

"},{"location":"Components/Clock/ClockPublisher/#list-of-time-sources","title":"List of Time Sources","text":"Type String Value for JSON Config Description UNITY unity based on the time of the Unity Engine SS2 ss2 driven by an external source, used by the scenario simulator v2 DOTNET_SYSTEM system based on system time, starting with time since UNIX epoch, progressing according to simulation timescale DOTNET_SIMULATION simulation based on system time, starting with zero value, progressing according to simulation timescale ROS2 ros2 based on ROS2 time (system time by default)"},{"location":"Components/Clock/ClockPublisher/#architecture","title":"Architecture","text":"

The ClockPublisher operates within a dedicated thread called the 'Clock' thread. This design choice offers significant advantages by freeing the publishing process from the constraints imposed by fixed update limits. As a result, ClockPublisher is able to consistently publish time at a high rate, ensuring stability and accuracy.
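
A minimal sketch of this pattern (class and method names, and the exact rate, are illustrative assumptions, not the actual AWSIM code):

```csharp
using System.Threading;

// Illustrative sketch of a dedicated clock thread publishing at ~100 Hz,
// independent of Unity's FixedUpdate rate.
public class ClockThreadSketch
{
    private readonly Thread clockThread;
    private volatile bool running = true;

    public ClockThreadSketch()
    {
        clockThread = new Thread(Loop) { Name = "Clock", IsBackground = true };
        clockThread.Start();
    }

    private void Loop()
    {
        while (running)
        {
            PublishClock();   // read the TimeSource and publish the time
            Thread.Sleep(10); // ~100 Hz publishing rate
        }
    }

    private void PublishClock()
    {
        // The real implementation would build a clock message from the
        // current TimeSource value and publish it via the ROS 2 node.
    }

    public void Stop()
    {
        running = false;
        clockThread.Join();
    }
}
```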

"},{"location":"Components/Clock/ClockPublisher/#accessing-time-source","title":"Accessing Time Source","text":"

Running the clock publisher in a dedicated thread introduces the challenge of different threads accessing shared resources. In our case, the Main Thread and the Clock Thread compete for the TimeSource resource. The diagram below illustrates this concurrent behaviour, with two distinct threads vying for access to the TimeSource:

Given multiple sensors, each with its own publishing frequency, alongside a clock running at 100Hz, there is a notable competition for TimeSource resources. In such cases, it becomes imperative for the TimeSource class to be thread-safe.

"},{"location":"Components/Clock/ClockPublisher/#thread-safe-time-source","title":"Thread-Safe Time Source","text":"

The TimeSource synchronization mechanism employs a mutex to lock the necessary resource for the current thread. The sequence of actions undertaken each time the GetTime() method is called involves:
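
The overall locking scheme can be sketched as follows (our own illustration; field and method names are assumptions and the real AWSIM class may differ):

```csharp
using System.Diagnostics;

// Sketch of a thread-safe time source: every GetTime() call locks the
// shared state, computes the scaled elapsed time, and releases the lock.
public class ThreadSafeTimeSourceSketch
{
    private readonly object stateLock = new object();
    private readonly Stopwatch stopwatch = Stopwatch.StartNew();
    private double timeScale = 1.0; // mirrors the simulation timescale

    public void SetTimeScale(double scale)
    {
        lock (stateLock) { timeScale = scale; }
    }

    public void GetTime(out int seconds, out uint nanoseconds)
    {
        lock (stateLock)
        {
            // Scaled seconds elapsed since this source was created.
            double t = stopwatch.Elapsed.TotalSeconds * timeScale;
            seconds = (int)System.Math.Floor(t);
            nanoseconds = (uint)((t - seconds) * 1e9);
        }
    }
}
```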

"},{"location":"Components/Clock/ClockPublisher/#extensions","title":"Extensions","text":"

There are two additional classes used to synchronise the UnityEngine TimeAsDouble and TimeScale values between threads:
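
As an illustration of how such a synchronised value can be shared (assumed names; Unity's Time.timeAsDouble may only be read on the main thread):

```csharp
// Sketch: the main thread captures Unity's Time.timeAsDouble each frame;
// the Clock thread reads the cached copy under the same lock.
public class SynchronizedTimeAsDoubleSketch
{
    private readonly object lockObject = new object();
    private double cached;

    // Call from the main thread, e.g. in a MonoBehaviour Update().
    public void Capture(double unityTimeAsDouble)
    {
        lock (lockObject) { cached = unityTimeAsDouble; }
    }

    // Safe to call from the Clock thread.
    public double Value
    {
        get { lock (lockObject) { return cached; } }
    }
}
```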

"},{"location":"Components/Environment/AWSIMEnvironment/","title":"AWSIM Environment","text":""},{"location":"Components/Environment/AWSIMEnvironment/#awsim-environment","title":"AWSIM Environment","text":""},{"location":"Components/Environment/AWSIMEnvironment/#introduction","title":"Introduction","text":"

Environment is an object that contains all the elements visible in the scene, along with components that affect how they are rendered. It contains several objects aggregating static environment objects by type. Moreover, it contains elements responsible for controlling random traffic.

Own Environment prefab

If you would like to develop your own prefab Environment for AWSIM, we encourage you to read this tutorial.

AutowareSimulation scene

If you would like to see how an Environment with random traffic works or run some tests, we encourage you to familiarize yourself with the AutowareSimulation scene described in this section.

The Environment prefab is also used to create a point cloud (*.pcd file) needed to locate the EgoVehicle in the simulated AWSIM scene. The point cloud is created using the RGL plugin and then used in Autoware. We encourage you to familiarize yourself with an example scene for creating a point cloud - described here.

Create PointCloud (*.pcd file)

If you would like to learn how to create a point cloud in AWSIM using Environment prefab, we encourage you to read this tutorial.

"},{"location":"Components/Environment/AWSIMEnvironment/#architecture","title":"Architecture","text":"

The architecture of an Environment - with dependencies between components - is presented in the following diagram.

"},{"location":"Components/Environment/AWSIMEnvironment/#prefabs","title":"Prefabs","text":"

Prefabs can be found under the following path:

Name | Description | Path
Nishishinjuku | Only stationary visual elements, no traffic | Assets/AWSIM/Prefabs/Environments/Nishishinjuku.prefab
Nishishinjuku RandomTraffic | Stationary visual elements along with random traffic | Assets/AWSIM/Prefabs/Environments/Nishishinjuku RandomTraffic.prefab
Nishishinjuku Traffic | Stationary visual elements along with non-random traffic | Assets/AWSIM/Prefabs/Environments/Nishishinjuku Traffic.prefab

Environment prefab

Due to the similarity of the above prefabs, this section focuses on the Nishishinjuku RandomTraffic prefab. The exact differences between Nishishinjuku RandomTraffic and Nishishinjuku Traffic will be described in the future.

Environment name

In order to standardize the documentation, the name Environment will be used in this section as the equivalent of the prefab named Nishishinjuku RandomTraffic.

Nishishinjuku RandomTraffic prefab has the following content:

As you can see it contains:

All of these objects are described below in this section.

"},{"location":"Components/Environment/AWSIMEnvironment/#visual-elements","title":"Visual elements","text":"

Nishishinjuku RandomTraffic prefab contains many visual elements which are described here.

"},{"location":"Components/Environment/AWSIMEnvironment/#link-in-the-default-scene","title":"Link in the default Scene","text":"

The Nishishinjuku RandomTraffic prefab is added as a child of the Environment object, with a 90-degree rotation about the Oy axis between them. This rotation compensates for the differences in coordinate alignment between the Nishishinjuku RandomTraffic prefab objects (which have been modeled as *.fbx files) and the specifics of the GridZone definition (more on this is described here).

The Environment object is added to AutowareSimulation, which is added directly to the main parent of the scene - there are no transformations between these objects.

"},{"location":"Components/Environment/AWSIMEnvironment/#components","title":"Components","text":"

Nishishinjuku RandomTraffic (Environment) prefab contains only one component:

"},{"location":"Components/Environment/AWSIMEnvironment/#layers","title":"Layers","text":"

In order to enable the movement of vehicles around the environment, additional layers have been added to the project: Ground and Vehicle.

All objects that act as ground for the NPCVehicles and EgoVehicle to move on have been added to the Ground layer - they cannot pass through each other and should collide so the physics engine can calculate their interactions.

For this purpose, NPCVehicles and EgoVehicle have been added to the Vehicle layer.

In the project physics settings, it is ensured that collisions between objects in the Vehicle layer are disabled (this applies to EgoVehicle and NPCVehicles - they do not collide with each other):
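
The same rule can be expressed in code with Unity's standard physics API (a sketch; the layer name "Vehicle" follows the description above):

```csharp
using UnityEngine;

public class DisableVehicleSelfCollision : MonoBehaviour
{
    private void Awake()
    {
        int vehicleLayer = LayerMask.NameToLayer("Vehicle");
        // Objects on the Vehicle layer never collide with each other,
        // matching the project physics settings described above.
        Physics.IgnoreLayerCollision(vehicleLayer, vehicleLayer, true);
    }
}
```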

"},{"location":"Components/Environment/AWSIMEnvironment/#traffic-components","title":"Traffic Components","text":"

Because of how the RandomTrafficSimulator, TrafficIntersections, TrafficLanes and StopLines objects are used, they are described in a separate section, Traffic Components - where all the elements necessary for simulated random traffic are presented.

"},{"location":"Components/Environment/AWSIMEnvironment/#visual-elements-sjk","title":"Visual Elements (SJK)","text":"

The visual elements have been loaded and organized using the *.fbx files, which can be found under the path:

Assets/AWSIM/Externals/Nishishinjuku/Nishishinjuku_optimized/Models/*\n

Environment prefab contains several objects aggregating stationary visual elements of space by their category:

Scene Manager

For models (visual elements) added to the prefab to work properly with the LidarSensor using RGL, make sure that the SceneManager component is added to the scene - more about it is described in this section.

In the scene containing the Nishishinjuku RandomTraffic prefab, the Scene Manager (script) is added as a component to the AutowareSimulation object containing the Environment.

"},{"location":"Components/Environment/AWSIMEnvironment/#trafficlights","title":"TrafficLights","text":"

TrafficLights are a stationary visual element belonging to the SJK01_P03 group. The lights are divided into two types, the classic TrafficLights used by vehicles at intersections and the PedestrianLights found at crosswalks.

Classic traffic lights are aggregated at object TrafficLightA01_Root01_ALL_GP01

while lights used by pedestrians are aggregated at object TrafficLightB01_Root01_All_GP01.

TrafficLights and PedestrianLights are developed using models available in the form of *.fbx files, which can be found under the following path: Assets/AWSIM/Externals/Nishishinjuku/Nishishinjuku_opimized/Models/TrafficLights/Models/*

"},{"location":"Components/Environment/AWSIMEnvironment/#classic-trafficlights","title":"Classic TrafficLights","text":"

Classic TrafficLights, apart from their housing, always contain 3 signaling light sources of different colors - from left to right: green, yellow, red. Optionally, they can have additional sources signaling permission to drive in a specific direction, in the form of one or three signaling arrows.

In the environment there are many classic lights with different signaling configurations. However, each contains:

"},{"location":"Components/Environment/AWSIMEnvironment/#materials","title":"Materials","text":"

An important element configured in the TrafficLights object is the list of materials in the Mesh Renderer component. The material with index 0 always applies to the housing of the lights. Subsequent elements 1-6 correspond to successive slots of light sources (round luminous objects) - starting from the upper left corner of the object, moving right, then to the bottom row and back to the left corner. These indexes are used in the Traffic Light (script) - described here.

Materials for lighting slots that are assigned in Mesh Renderer can be found in the following path: Assets/AWSIM/Externals/Nishishinjuku/Nishishinjuku_opimized/Models/TrafficLights/Materials/*

"},{"location":"Components/Environment/AWSIMEnvironment/#pedestrianlights","title":"PedestrianLights","text":"

PedestrianLights, apart from their housing, always contain 2 signaling light sources of different colors - red on top and green on the bottom.

In the environment there are many pedestrian lights - they have the same components as classic TrafficLights, but the main difference is the configuration of their materials.

"},{"location":"Components/Environment/AWSIMEnvironment/#materials_1","title":"Materials","text":"

An important element configured in the PedestrianLights object is the list of materials in the Mesh Renderer component. The material with index 0 always applies to the housing of the lights. Subsequent elements 1-2 correspond to successive slots of light sources (round luminous objects) - ordered from top to bottom. These indexes are used in the Traffic Light (script) - described here.

Materials for lighting slots that are assigned in Mesh Renderer can be found in the following path: Assets/AWSIM/Externals/Nishishinjuku/Nishishinjuku_opimized/Models/TrafficLights/Materials/*

"},{"location":"Components/Environment/AWSIMEnvironment/#volume","title":"Volume","text":"

Volume is a GameObject with a Volume component, which is used in the High Definition Render Pipeline (HDRP). It defines a set of scene settings and properties. It can be either global, affecting the entire scene, or local, influencing specific areas within the scene. Volumes are used to interpolate between different property values based on the Camera's position, allowing for dynamic changes to environment settings such as fog color, density, and other visual effects.

In the case of the Nishishinjuku RandomTraffic prefab, the volume works in global mode and has a Volume profile loaded. This volume profile overrides the default Volume properties related to the following components: Fog, Shadows, Ambient Occlusion, Visual Environment, HDRI Sky. It can be found under the following path: Assets/AWSIM/Prefabs/Environments/Nishishinjuku/Volume Profile.asset

"},{"location":"Components/Environment/AWSIMEnvironment/#directional-light","title":"Directional Light","text":"

Directional Light is a GameObject with a Light component, which is used in the High Definition Render Pipeline (HDRP). It controls the shape, color, and intensity of the light. It also controls whether or not the light casts shadows in the scene, as well as more advanced settings.

In the case of the Nishishinjuku RandomTraffic prefab, a Directional type light is added. It creates effects similar to sunlight in the scene. This light illuminates all GameObjects in the scene as if the light rays were parallel and always from the same direction. A directional light disregards the distance between the Light itself and the target, so the light does not diminish with distance. The strength of the Light (Intensity) is set to 73123.09 Lux. In addition, a Shadow Map with a resolution of 4096 is enabled, which is updated on Every Frame of the simulation. The transform of the Directional Light object is set so that it shines on the environment almost vertically from above.

"},{"location":"Components/Environment/AWSIMEnvironment/#npcpedestrians","title":"NPCPedestrians","text":"

NPCPedestrians is an aggregating object for the NPCPedestrian objects placed in the environment. The Nishishinjuku RandomTraffic prefab has 7 NPCPedestrian (humanElegant) prefabs placed in selected locations. You can read more about the NPCPedestrian prefab in this section.

"},{"location":"Components/Environment/AWSIMEnvironment/#environment-script","title":"Environment (script)","text":"

Environment (script) contains the information about how a simulated Environment is positioned in the real world. That is, it describes the real-world position of a simulated Environment.

AWSIM uses part of the Military Grid Reference System (MGRS). To understand this topic, you only need to know that using MGRS you can specify distinct parts of the globe with different accuracy. For AWSIM the chosen accuracy is a 100x100 km square. Such a square is identified by a unique code like 54SUE (for more information on Grid Zones please see this page).

Inside this Grid Zone the exact location is specified with the offset calculated from the bottom-left corner of the Grid Zone. You can interpret the Grid Zone as a local coordinate system in which you position the Environment.

In the Nishishinjuku RandomTraffic prefab, the simulated Environment is positioned in the Grid Zone 54SUE. The offset is equal to 81655.73 meters in the Ox axis, 50137.43 meters in the Oy axis and 42.49998 meters in the Oz axis. In addition to this shift, it is also necessary to rotate the Environment in the scene by 90 degrees about the Oy axis - this is ensured by the transform in the prefab object.

This means that the 3D models were created in reference to this exact point and because of that the 3D models of Environment align perfectly with the data from Lanelet2.

The essence of Environment (script)

The Environment (script) configuration is necessary at the moment of loading data from Lanelet2.

Internally it shifts the elements from Lanelet2 by the given offset so that they align with the Environment that is located at the local origin with no offset.
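
A simplified sketch of this shift (our own illustration, assuming a plain east/north/up to Unity axis mapping; not the actual AWSIM code):

```csharp
using UnityEngine;

public static class LaneletOffsetSketch
{
    // Shift a Lanelet2 point, given in MGRS-local coordinates, by the
    // Environment offset so it aligns with an Environment placed at the
    // Unity origin. The axis mapping is a simplifying assumption.
    public static Vector3 ToUnityPosition(
        double localX, double localY, double elevation,
        double offsetX, double offsetY, double offsetZ)
    {
        double east  = localX    - offsetX; // e.g. 81655.73 m for 54SUE
        double north = localY    - offsetY; // e.g. 50137.43 m
        double up    = elevation - offsetZ; // e.g. 42.49998 m
        // East -> Unity x, up -> Unity y, north -> Unity z.
        return new Vector3((float)east, (float)up, (float)north);
    }
}
```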

"},{"location":"Components/Environment/AddNewEnvironment/AddEnvironment/","title":"Add Environment","text":""},{"location":"Components/Environment/AddNewEnvironment/AddEnvironment/#add-an-environment","title":"Add an Environment","text":"

Environment is an important part of a Scene in AWSIM. Every aspect of the simulated surrounding world needs to be included in the Environment prefab - in this section you will learn how to develop it. However, first Lanelet2 needs to be developed along with 3D models of the world, which will be the main elements of this prefab.

Tip

If you want to learn more about the Environment at AWSIM, please visit this page.

"},{"location":"Components/Environment/AddNewEnvironment/AddEnvironment/#create-a-lanelet2","title":"Create a Lanelet2","text":"

Before you start creating a Lanelet2 map, we encourage you to read the documentation to find out what Lanelet2 is all about. Lanelet2 can be created using VectorMapBuilder (VMB) based on a PCD obtained from a real-life LiDAR sensor.

When working with VMB, it is necessary to ensure the most accurate mapping of the road situation using the available elements. Especially important are TrafficLanes, created in VMB as connected Road Nodes, and StopLines, created in VMB as Road Surface Stoplines.

Lanelet2 positioning

Lanelet2 should be created in MGRS coordinates of the real place you are recreating. Please position your Lanelet2 relative to the origin (bottom left corner) of the MGRS Grid Zone with the 100 km Square ID in which the location lies. More details can be read here.

You can think of the Grid Zone as a local coordinate system. Instead of making the global (0,0) point (the crossing of the Equator and the Prime Meridian) our coordinate system origin, we take a closer one. The MGRS Grid Zone with 100 km Square ID code designates a 100x100 km square on the map, and we take its bottom left corner as our local origin.

Example

Let's examine one node from an example Lanelet2 map:

<node id=\"4\" lat=\"35.68855194431519\" lon=\"139.69142711058254\">\n    <tag k=\"mgrs_code\" v=\"54SUE815501\"/>\n    <tag k=\"local_x\" v=\"81596.1357\"/>\n    <tag k=\"local_y\" v=\"50194.0803\"/>\n    <tag k=\"ele\" v=\"34.137\"/>\n</node>\n

The position of the node with id=\"4\" is described by the absolute coordinates given in the <node> element. In this example the coordinates are lat=\"35.68855194431519\" and lon=\"139.69142711058254\".

It is also described as a local transformation, defined as a translation relative to the origin of the MGRS Grid Zone with 100 km Square ID (bottom left corner). The MGRS Grid Zone designation with 100 km Square ID in this case is equal to 54SUE. In this example the offset in the X axis is k=\"local_x\" v=\"81596.1357\" and the offset in the Y axis is k=\"local_y\" v=\"50194.0803\".

Note that elevation information is also included.

"},{"location":"Components/Environment/AddNewEnvironment/AddEnvironment/#create-3d-models","title":"Create 3D models","text":"

You can create the 3D models of an Environment as you wish. It is advised, however, to prepare the models in the form of .fbx files, with materials and textures included in separate directories. Many models are delivered in this format. This file format allows you to import models into Unity with materials and replace materials while importing. You can learn more about it here.

You can see a .fbx model added and modified on the fly in the example of this section.

"},{"location":"Components/Environment/AddNewEnvironment/AddEnvironment/#guidelines","title":"Guidelines","text":"

To improve the simulation performance of a scene containing your Environment prefab, please keep in mind some of these tips when creating 3D models:

  1. Prefer many smaller models over a few big ones.

    In general it is beneficial for performance to make one small mesh of an object like a tree and reuse it across the scene by placing many prefabs, instead of making one giant mesh containing all trees in the given scene. It is beneficial even in situations when you are not reusing the meshes. Let's say you have a city with many buildings - and every one of those buildings is different - it is still advised to model those buildings individually and make them separate GameObjects.

  2. Choose texture resolution appropriately.

    Always keep in mind the target usage of your texture. Avoid making a high resolution texture for a small object or one that will always be far away from the camera. This way you can save some computing power by not calculating details that will not be seen at the screen resolution anyway.

    !!! tip \"Practical advice\" You can follow these simple rules when deciding on texture quality (texel density):

      - For general objects choose 512px/m (so the minimum size of texture is 512/512)\n  - For important objects that are close to the camera choose 1024px/m (so the minimum size of texture is 1024/1024)\n
  3. (optional) Add animation.

    Add animations to the correct objects. If some elements in the 3D model are interactive, they should be divided into separate parts.

What's more, consider these tips related directly to the use of 3D models in AWSIM:

  1. Creating a 3D model based on actual point cloud data makes it more realistic.
  2. AWSIM is created using HDRP (High Definition Render Pipeline), which performs better when object meshes are merged.
  3. Occlusion culling and flutter culling cannot be used because the sensors' detection targets would disappear.
  4. Each traffic light should have a separate GameObject. Also, each light in the traffic light should be split into separate materials.
"},{"location":"Components/Environment/AddNewEnvironment/AddEnvironment/#create-an-environment-prefab","title":"Create an Environment prefab","text":"

In this part, you will learn how to create an Environment prefab - that is, develop a GameObject containing all the necessary elements and save it as a prefab.

"},{"location":"Components/Environment/AddNewEnvironment/AddEnvironment/#1-add-3d-models","title":"1. Add 3D models","text":"

In this section we will add roads, buildings, greenery, signs, road markings etc. to our scene.

Most often your models will be saved in the .fbx format. If so, you can customize the materials in the imported model just before importing it. Sometimes this is necessary, as models come with placeholder materials. You can either

In order to add 3D models from a .fbx file to the Scene please follow these steps:

  1. In the Project view navigate to the directory where the model is located and click on the model file.
  2. Now you can customize the materials used in the model in the Inspector view.
  3. Drag the model into the Scene where you want to position it.
  4. Move the Object in the Hierarchy tree appropriately.
  5. (optional) Now you can save this model configuration as a prefab to easily reuse it. Do this by dragging the Object from the Scene into the Project view. When you get a warning, make sure to select that you want to create an original new prefab.

Example

An example video of the full process of importing a model, changing the materials, saving the new model as a prefab and importing the new prefab.

When creating a complex Environment with many elements you should group them appropriately in the Hierarchy view. This depends on your individual style, but it is good practice to add all repeating elements into one common Object, e.g. all identical traffic lights grouped in a TrafficLights Object. The same can be done with trees, buildings, signs etc. You can group Objects as you like.

Object hierarchy

When adding elements to the Environment that are part of the static world (like 3D models of buildings, traffic lights etc.) it is good practice to collect them in one parent GameObject called Map or something similar.

By doing this you can set a transformation of the parent GameObject Map to adjust the world pose in reference to e.g. loaded objects from Lanelet2.

Remember to unpack

Please remember to unpack all Objects added into the scene. If you don't, they will change materials together with the .fbx model file, as demonstrated in the example below.

This is unwanted behavior. When you import a model and change some materials, but leave the rest default and don't unpack the model, your instances of this model on the scene may change when you change the original .fbx model settings.

See the example below to visualize the problem.

Example

In this example we will

Watch what happens: the instance on the Scene changes its materials together with the model. This only happens if you don't unpack the model.

Example Environment after adding 3D models

After completing this step you should have an Environment Object that looks similar to the one presented below.

The Environment with 3D models can look similar to the one presented below.

"},{"location":"Components/Environment/AddNewEnvironment/AddEnvironment/#2-add-an-environment-script","title":"2. Add an Environment Script","text":"

Add an Environment Script as a component of the Environment object (see the last example in the previous section). It does not change the appearance of the Environment, but is necessary for the simulation to work correctly.

  1. Click on the Add Component button in the Environment object.

  2. Search for Environment and select it.

  3. Set the MGRS to the offset of your Environment as explained in this section.

Info

Due to the differences between VectorMapBuilder and Unity, it may be necessary to set the transform of the Environment object. The transform in Environment should be set in such a way that the TrafficLanes match the modeled roads. Most often it is necessary to set a positive 90-degree rotation about the Y axis.

This step should be done after importing items from lanelet2. Only then will you know whether the Environment is misaligned with the items from lanelet2.

"},{"location":"Components/Environment/AddNewEnvironment/AddEnvironment/#3-add-a-directional-light","title":"3. Add a Directional Light","text":"
  1. Create a new child Object of the Environment and name it Directional Light.

  2. Click Add Component button, search for Light and select it.

  3. Change light Type to Directional.

  4. Now you can configure the directional light as you wish. E.g. change the intensity or orientation.

Tip

For more details on lighting check out official Unity documentation.

Example Environment after adding Directional Light

"},{"location":"Components/Environment/AddNewEnvironment/AddEnvironment/#4-add-a-volume","title":"4. Add a Volume","text":"
  1. Create a new child object of the Environment and name it Volume.

  2. Click Add Component search for Volume and select it.

  3. Change the Profile to Volume Profile and wait for changes to take effect.

  4. Now you can configure the Volume individually as you wish.

Tip

For more details on volumes check out the official Unity documentation.

Example Environment after adding Volume

"},{"location":"Components/Environment/AddNewEnvironment/AddEnvironment/#5-add-npcpedestrians","title":"5. Add NPCPedestrians","text":"
  1. Make NPCPedestrians parent object.

  2. Open Assets/AWSIM/Prefabs/NPCs/Pedestrians in Project view and drag a humanElegant into the NPCPedestrians parent object.

  3. Click Add Component in the humanElegant object and search for Simple Pedestrian Walker Controller Script and select it.

    This is a simple Script that makes the pedestrian walk straight and turn around indefinitely. You can configure pedestrian behavior with 2 parameters (a minimal sketch of such a script is shown after this list).

    - Duration - how long the pedestrian will walk straight - Speed - how fast the pedestrian will walk straight

    !!!tip The Simple Pedestrian Walker Controller Script is best suited to be used on pavements.

  4. Finally position the NPCPedestrian on the scene where you want it to start walking.

    !!! warning Remember to set correct orientation, as the NPCPedestrian will walk straight from the starting position with the starting orientation.
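
Below is a minimal sketch of a script with this walk-straight-then-turn behavior (our own illustration, not the actual Simple Pedestrian Walker Controller Script):

```csharp
using UnityEngine;

public class StraightWalkerSketch : MonoBehaviour
{
    [SerializeField] private float duration = 4.0f; // seconds of straight walking
    [SerializeField] private float speed = 1.0f;    // walking speed in m/s

    private float elapsed;

    private void Update()
    {
        elapsed += Time.deltaTime;
        if (elapsed >= duration)
        {
            // Turn around and start a new straight segment.
            transform.Rotate(0f, 180f, 0f);
            elapsed = 0f;
        }
        transform.Translate(Vector3.forward * speed * Time.deltaTime);
    }
}
```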

Example Environment after adding NPC Pedestrians

"},{"location":"Components/Environment/AddNewEnvironment/AddEnvironment/#6-save-an-environment-prefab","title":"6. Save an Environment prefab","text":"

After doing all the previous steps and having your Environment finished you can save it to prefab format.

  1. Find an Environments directory in the Project view (Assets/AWSIM/Prefabs/Environments).
  2. Drag the Environment Object into the Project view.
  3. (optional) Change the prefab name to recognize it easily later.

Success

Once you've added the Environment, you need to add and configure TrafficLights. For details please visit this tutorial.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/AddRandomTraffic/","title":"Add Random Traffic","text":"

To add Random Traffic to your scene you need the Random Traffic Simulator Script.

  1. Create a new Game Object as a child of Environment and call it RandomTrafficSimulator.

  2. Click a button Add Component in the Inspector to add a script.

  3. A small window should pop up. Search for the RandomTrafficSimulator script and add it by double clicking it or by pressing enter.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/AddRandomTraffic/#basic-configuration","title":"Basic Configuration","text":"

After clicking on the newly created RandomTrafficSimulator object in the Scene tree you should see something like this in the Inspector view.

The Random Traffic Simulator, as the name suggests, generates traffic based on random numbers. To replicate situations, you can set a specific seed.

You can also set the Vehicle Layer Mask and Ground Layer Mask. It is important to set these layers correctly, as they are the basis for vehicle physics. If set incorrectly, the vehicles may fall through the ground into infinity.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/AddRandomTraffic/#add-npcvehicles","title":"Add NPCVehicles","text":"

The Random Traffic Simulator Script moves, spawns and despawns vehicles based on the configuration. These settings can be adjusted to your preference.

  1. Setting Max Vehicle Count.

    This parameter sets a limit on how many vehicles can be added to the scene at one time.

  2. NPC Prefabs

    These are models of vehicles that should be spawned on the scene. To add NPC Prefabs please follow these steps:

    1. To do this, click on the "+" sign, and in the new list element at the bottom click on the small icon on the right to select a prefab.


    2. Change to the Assets tab in the small window that popped up.


    3. Search for the Vehicle prefab you want to add, e.g. Hatchback.


    Available NPC prefabs are shown in the NPC Vehicle section.

    !!! tip \"Control NPC Vehicle spawning\" Random Traffic Simulator Script will on random select one prefab from Npc Prefabs list every time when there are not enough vehicles on the scene (the number of vehicles on the scene is smaller than the number specified in the Max Vehicle Count field).

      You can control the odds of selecting one vehicle prefab over another by adding more than one instance of the same prefab to this list.\n
"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/AddRandomTraffic/#add-spawnable-lanes","title":"Add spawnable lanes","text":"

Spawnable lanes are the lanes on which new vehicles can be spawned by the Random Traffic Simulator Script. Best practice is to use the beginnings of lanes at the edges of the map as spawnable lanes.

Warning

Make sure you have a lanelet added into your scene. The full tutorial on this topic can be found here.

Adding spawnable lanes is similar to Adding NPC Prefabs.

  1. Add an element to the Spawnable Lanes list by clicking on the "+" symbol or by entering the number of lanes directly.

  2. Now you can click on the small icon on the right of the list element and select a Traffic Lane you are interested in.

    Unfortunately all Traffic Lanes have the same name, so it can be difficult to know which one to use. Alternatively, you can do the following to add a traffic lane by visually selecting it in the editor:

    - Lock RandomTrafficSimulator in the Inspector view.

      <img src=\"add_traffic_lane3.gif\" alt=\"Lock inspector view gif\" width=\"500\"/>\n

    - Select the Traffic Lane you are interested in on the Scene and as it gets highlighted in the Hierarchy view you can now drag and drop this Traffic Lane into the appropriate list element.

      ![Select traffic lane and drag it to the list gif](add_traffic_lane4.gif)\n
"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/AddRandomTraffic/#vehicles-configuration","title":"Vehicles configuration","text":"

The last thing to configure is the behavior of the NPCVehicles. You can specify the acceleration rate of the vehicles and three deceleration values.

Question

This configuration is common for all vehicles managed by the Random Traffic Simulator Script.

Success

The last thing that needs to be done for RandomTraffic to work properly is to add intersections with traffic lights and configure their sequences. Details here.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/AddTrafficIntersection/","title":"Add Traffic Intersection","text":"

Every TrafficIntersection on the scene needs to be added as a GameObject. Best practice is to create a parent object TrafficIntersections and add all instances of TrafficIntersection as its children. You can do this the same way as with the Random Traffic Simulator.

Traffic Lights configuration

Before performing this step, check all TrafficLights for correct configuration and make sure that the TrafficLights have their scripts added. If you want to learn how to add and configure them, check out this tutorial.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/AddTrafficIntersection/#add-a-box-collider","title":"Add a Box Collider","text":"
  1. TrafficIntersection needs to be marked with a box collider. First click on the Add Component button.

  2. In the window that popped up search for Box Collider and select it.

  3. Then set the position, orientation and size of the Box Collider. You can do this by manipulating the Box Collider properties Center and Size in the Inspector view.

    !!! info \"Traffic Intersection Box Collider guidelines\" When adding a Box Collider marking your Traffic Intersection please make sure that

      - It is **not** floating over the ground - there is no gap between the Box Collider and The Traffic Intersection\n  - It is high enough to cover all Vehicles that will be passing through the Intersection\n  - It accurately represents the shape, position and orientation of the Traffic Intersection\n
"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/AddTrafficIntersection/#add-a-traffic-intersection-script","title":"Add a Traffic Intersection Script","text":"
  1. Click on the Add Component button.

  2. In the window that popped up search for Traffic Intersection and select it.

  3. You need to set a proper Collider Mask in order for the script to work.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/AddTrafficIntersection/#create-traffic-light-groups","title":"Create traffic light groups","text":"

Traffic Light Groups are groups of traffic lights that are synchronized, meaning they light up with the same color and pattern at all times.

Traffic lights are divided into groups to simplify the process of creating a lighting sequence. By default you will see 4 Traffic Light Groups; you can add and remove them to suit your needs.

  1. First, from the drop-down menu called Group, choose the Traffic Light Group name you want to assign to your Traffic Light Group.

  2. Then add as many Traffic Lights as you want your group to have. From the drop-down menu select the Traffic Lights you want to add.

    !!! tip \"Select Traffic Lights visually\" If you have a lot of Traffic Lights it can be challenging to add them from the list. You can select them visually from the Scene the same as you had selected Traffic Lanes in the Random Traffic Simulator.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/AddTrafficIntersection/#create-lighting-sequences","title":"Create lighting sequences","text":"

Lighting Sequences is a list of commands based on which the Traffic Lights will operate on an intersection. The elements in the Lighting Sequences list are changes (commands) that will be executed on the Traffic Light Groups.

A Group Lighting Order should be interpreted as a command (or order) given to all Traffic Lights in the selected Traffic Light Group. In Group Lighting Orders you can set a different traffic light status for every Traffic Light Group (in separate elements). The Lighting Sequences list is processed in an infinite loop.

It should be noted that changes applied to one Traffic Light Group remain in effect until the next Group Lighting Order is given to that Traffic Light Group. This means that if in one Group Lighting Order no command is sent to a Traffic Light Group, then this Group will retain its current lighting pattern (color, bulb and status).

For every Lighting Sequences Element you have to specify the following (an illustrative data shape is sketched after this list)

  1. Interval Sec

    This is the time for which the sequence should wait until executing the next order - in other words, how long this state will be active.

  2. For every element in Group Lighting Orders the following needs to be specified

    1. Group to which this order will be applied 2. List of orders (Bulb Data)

      In other words - what bulbs should be turned on, their color and pattern.\n\n  - Type - What type of bulb should be turned on\n  - Color - What color this bulb should have (in most cases this will be the same as color of the bulb if specified)\n  - Status - How the bulb should light up (selecting `SOLID_OFF` is necessary only when you want to turn the Traffic Light completely off, meaning **no** bulb will light up)\n\n  !!!note\n      When applying the change to a Traffic Light\n\n      - First all bulbs are turned off\n      - Only after that changes made in the order are applied\n\n      This means it is only necessary to supply the data about what bulbs should be turned on.\n      E.g. you don't have to turn off a red bulb when turning on the green one.\n
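
To visualize the structure described above, here is an illustrative data shape (field names and types are our own assumptions, not the actual AWSIM classes):

```csharp
using System;

[Serializable]
public class BulbOrder
{
    public string bulbType; // e.g. "GREEN_BULB" - which bulb to turn on
    public string color;    // e.g. "GREEN"
    public string status;   // e.g. "SOLID_ON" or "FLASHING"
}

[Serializable]
public class GroupLightingOrder
{
    public string group;         // target Traffic Light Group
    public BulbOrder[] bulbData; // bulbs to turn on for this group
}

[Serializable]
public class LightingSequenceElement
{
    public float intervalSec;           // how long this state stays active
    public GroupLightingOrder[] orders; // groups not listed keep their state
}
```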

Warning

The first Element in the Lighting Sequences (in most cases) should contain bulb data for every Traffic Light Group. Traffic Light Groups not specified in the first Element will not light up at the beginning of the scene.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/AddTrafficIntersection/#example","title":"Example","text":"

Let's consider the following lighting sequence element.

In the Lighting Sequence Element 5 we tell all Traffic Lights in the Vehicle Traffic Light Group 2 to light up their Green Bulb with the color Green and status Solid On which means that they will be turned on all the time. We also implicitly tell them to turn all other Bulbs off.

At the same time we tell all Traffic Lights in the Pedestrian Traffic Light Group 2 to do the very same thing.

This state will be active for the next 15 seconds, and after that the Traffic Intersection will move to the next Element in the Sequence.

Now let's consider the following Lighting Sequences Element 6.

Here we order the Traffic Lights in the Pedestrian Traffic Light Group 2 to light up their Green Bulb with the color Green and status Flashing. We also implicitly tell them to turn all other bulbs off, which were already off from the implicit change in Element 5, so this effectively does nothing.

Note that Lighting Sequences Element 6 has no orders for Vehicle Traffic Light Group 2. This means that Traffic Lights in the Vehicle Traffic Light Group 2 will hold on to their earlier orders.

This state will be active for 5 seconds, which means that the Traffic Lights in the Vehicle Traffic Light Group 2 will be lighting solid green for a total of 20 seconds.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/AddTrafficIntersection/#how-to-test","title":"How to test","text":"

To test how your Traffic Intersection behaves simply run the Scene as shown here (but don't launch Autoware). To take a better look at the Traffic Lights you can change to the Scene view by pressing ctrl + 1 - now you can move the camera freely (to go back to the Game view simply press ctrl + 2).

As the time passes you can examine whether your Traffic Intersection is configured correctly.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/","title":"Load Items From Lanelet","text":""},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#load-items-from-lanelet2","title":"Load items from lanelet2","text":"

To add RandomTraffic to the Environment, it is necessary to load elements from the lanelet2. As a result of loading, TrafficLanes and StopLines will be added to the scene. Details of these components can be found here.

Warning

Before following this tutorial make sure you have added an Environment Script and set a proper MGRS offset position. This position is used when loading elements from the lanelet2!

  1. Click on the AWSIM button in the top menu of the Unity editor and navigate to AWSIM -> Random Traffic -> Load Lanelet.

  2. In the window that pops-up select your osm file, change some Waypoint Settings to suit your needs and click Load.

    !!! info \"Waypoint Settings explanation\" - Resolution - resolution of resampling. Lower values provide better accuracy at the cost of processing time - Min Delta Length - minimum length(m) between adjacent points - Min Delta Angle - minimum angle(deg) between adjacent edges. Lowering this value produces a smoother curve

  3. Traffic Lanes and Stop Lines should appear in the Hierarchy view. If they appear somewhere else in your Hierarchy tree, move them into the Environment object.
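
For intuition, here is a sketch of the kind of filtering these settings control (our own illustration, not AWSIM's loader): a resampled point is kept only if it adds enough length or enough direction change.

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class WaypointFilterSketch
{
    public static List<Vector3> Simplify(
        IReadOnlyList<Vector3> points, float minDeltaLength, float minDeltaAngleDeg)
    {
        var result = new List<Vector3>();
        if (points.Count == 0) return result;
        result.Add(points[0]);
        for (int i = 1; i < points.Count - 1; i++)
        {
            Vector3 inDir  = points[i] - result[result.Count - 1];
            Vector3 outDir = points[i + 1] - points[i];
            bool farEnough   = inDir.magnitude >= minDeltaLength;
            bool sharpEnough = Vector3.Angle(inDir, outDir) >= minDeltaAngleDeg;
            // Keep the waypoint only if it adds enough length or curvature.
            if (farEnough || sharpEnough) result.Add(points[i]);
        }
        if (points.Count > 1) result.Add(points[points.Count - 1]);
        return result;
    }
}
```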

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#complete-loaded-trafficlanes","title":"Complete loaded TrafficLanes","text":"

The Traffic Lanes that were loaded should be configured according to the road situation. The aspects you can configure:

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#how-to-test","title":"How to test","text":"

If you want to test your Traffic Lanes, you have to try running Random Traffic. To verify one particular Traffic Lane or Traffic Lane connection, you can make a new spawnable lane next to the Traffic Lane you want to test. This way you can be sure NPCVehicles will start driving on the Traffic Lane you are interested in right from the beginning.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#add-a-stopline-manually","title":"Add a StopLine manually","text":"

If something goes wrong when loading data from lanelet2, or you just want to add another StopLine manually, please do the following

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#1-add-a-gameobject","title":"1. Add a GameObject","text":"

Add a new GameObject StopLine in the StopLines parent object.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#2-add-a-stopline-script","title":"2. Add a StopLine Script","text":"

Add a StopLine Script by clicking 'Add Component' and searching for Stop Line.

Example

So far your Stop Line should look like the following

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#3-set-points","title":"3. Set points","text":"

Set the position of points Element 0 and Element 1. These Elements are the two end points of a Stop Line. The Stop Line will span between these points.

You don't need to set any data in the 'Transform' section as it is not used anyway.

StopLine coordinate system

Please note that the Stop Line Script operates in the global coordinate system. The transformations of the StopLine Object and its parent Objects won't affect the Stop Line.
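
A tiny sketch illustrating why the Transform is irrelevant (our own illustration, not the actual Stop Line Script): the two points are used as world-space coordinates directly, never passed through the GameObject's transform.

```csharp
using UnityEngine;

public class GlobalStopLineSketch : MonoBehaviour
{
    public Vector3[] points = new Vector3[2]; // Element 0 and Element 1

    private void OnDrawGizmos()
    {
        if (points.Length < 2) return;
        Gizmos.color = Color.red;
        // World-space positions are drawn directly (no transform.TransformPoint),
        // so moving this GameObject does not move the line.
        Gizmos.DrawLine(points[0], points[1]);
    }
}
```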

Example

In this example you can see that the Position of the Game Object does not affect the position and orientation of the Stop Line.

For a Game Object in the center of the coordinate system.

The Stop Line is in the specified position.

However, with the Game Object shifted in the X axis.

The Stop Line stays in the same position as before, not affected by any transformations happening to the Game Object.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#4-has-stop-sign","title":"4. Has Stop Sign","text":"

Select whether there is a Stop Sign.

Select the Has Stop Sign tick-box confirming that this Stop Line has a Stop Sign. The Stop Sign can be either vertical or horizontal.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#5-select-a-traffic-light","title":"5. Select a Traffic Light","text":"

Select from the drop-down menu the Traffic Light that is on the Traffic Intersection and is facing the vehicle that would be driving on the Traffic Lane connected with the Stop Line you are configuring.

In other words select the right Traffic Light for the Lane on which your Stop Line is placed.

Select Traffic Lights visually

If you have a lot of Traffic Lights it can be challenging to add them from the list. You can select them visually from the Scene the same as you had selected Traffic Lanes in the Random Traffic Simulator.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#6-configure-the-traffic-lane","title":"6. Configure the Traffic Lane","text":"

Every Stop Line has to be connected to a Traffic Lane. This is done in the Traffic Lane configuration. For this reason please check the Traffic Lane section for more details.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#add-a-trafficlane-manually","title":"Add a TrafficLane manually","text":"

It is possible that something may go wrong when reading a lanelet2 and you need to add an additional Traffic Lane, or you may just want to add one. To add a Traffic Lane manually please follow the steps below.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#1-add-a-gameobject_1","title":"1. Add a GameObject","text":"

Add a new Game Object called TrafficLane into the TrafficLanes parent Object.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#2-add-a-traffic-lane-script","title":"2. Add a Traffic Lane Script","text":"

Click the 'Add Component' button, search for the Traffic Lane script and select it.

Example

So far your Traffic Lane should look like the following.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#3-configure-waypoints","title":"3. Configure Waypoints","text":"

Now we will configure the 'Waypoints' list. This list is an ordered list of points defining the Traffic Lane. When you want to add a waypoint to a Traffic Lane, just click on the + button or specify the number of waypoints in the number field to the right of the 'Waypoints' identifier.

The order of elements on this list determines how waypoints are connected.

Traffic Lane coordinate system

Please note that the Traffic Lane waypoints are located in the global coordinate system; any transformations set on the Game Object or its parent Objects will be ignored.

This behavior is the same as with the Stop Line. You can see the example provided in the Stop Line tutorial.

General advice

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#4-select-the-turn-direction","title":"4. Select the Turn Direction","text":"

You also need to select the Turn Direction. This field describes what the vehicles traveling on this Traffic Lane are doing in reference to other Traffic Lanes. You need to select whether the vehicles are

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#5-configure-next-lanes","title":"5. Configure Next Lanes","text":"

You need to add all Traffic Lanes that begin at the end of this Traffic Lane to the Next Lanes list. In other words, list every lane the vehicle can choose to continue on (e.g. drive straight or turn left with a choice of two different Traffic Lanes).

To do this click the + sign in the Next Lanes list and in the element that appeared select the correct Traffic Lane.

Next Lane example

Let's consider the following Traffic Intersection.

In this example we will consider the Traffic Lane coming from the bottom of the screen and turning right. After finishing driving on this Traffic Lane the vehicle has a choice of 4 different Traffic Lanes, each turning into a different lane on the parallel road.

All 4 Traffic Lanes are connected to the considered Traffic Lane. This situation is reflected in the Traffic Lane configuration shown below.

Select Traffic Lanes visually

If you have a lot of Traffic Lanes it can be challenging to add them from the list. You can select them visually from the Scene the same as you had selected Traffic Lanes in the Random Traffic Simulator.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#6-configure-previous-lanes","title":"6. Configure Previous Lanes","text":"

A Traffic Lane has to have its previous Traffic Lanes configured. This is done in the exact same way as configuring next lanes, which was shown in the previous step. Please do the same, but into the Prev Lanes list add the Traffic Lanes that come before the configured one instead of the ones after.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#7-configure-right-of-way","title":"7. Configure Right Of Way","text":"

Now we will configure the Right Of Way Lanes. The Right Of Way Lanes is a list of Traffic Lanes that have priority over the configured one. The process of adding the Right Of Way Lanes is the same as adding Next Lanes. For this reason we ask you to see the aforementioned step for a detailed description of how to do this (the only difference is that you add Traffic Lanes to the Right Of Way Lanes list).

Right of way example

In this example let's consider the Traffic Lane highlighted in blue from the Traffic Intersection below.

This Traffic Lane has to give way to all Traffic Lanes highlighted in yellow. This means all of the yellow Traffic Lanes have to be added to the 'Right Of Way Lanes' list, which is reflected in the configuration shown below.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#8-add-stop-line","title":"8. Add Stop Line","text":"

Adding a Stop Line is necessary only when a Stop Line is present at the end of the configured Traffic Lane. If so, please select the correct Stop Line from the drop-down list.

Select Stop Line visually

If you have a lot of Stop Lines it can be challenging to add them from the list. You can select them visually from the Scene the same as you had selected Traffic Lanes in the Random Traffic Simulator.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#9-add-speed-limit","title":"9. Add Speed Limit.","text":"

In the field called Speed Limit simply write the speed limit that is in effect on the configured Traffic Lane.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#10-set-the-right-of-ways","title":"10. Set the Right of Ways","text":"

To make the Right Of Ways list you configured earlier take effect simply click the 'Set RightOfWays' button.

"},{"location":"Components/Environment/AddNewEnvironment/AddTrafficLights/","title":"Add TrafficLights","text":""},{"location":"Components/Environment/AddNewEnvironment/AddTrafficLights/#add-trafficlights","title":"Add TrafficLights","text":"

To add TrafficLights to your Environment follow the steps below.

Tip

In the Environment you are creating there will most likely be many TrafficLights that should look and work the same way. To simplify the process of creating an environment it is advised to create one TrafficLight of each type with this tutorial and then save them as prefabs that you will be able to reuse.

"},{"location":"Components/Environment/AddNewEnvironment/AddTrafficLights/#1-add-trafficlight-object","title":"1. Add TrafficLight Object","text":"

Into your Map object in the Hierarchy view add a new Child Object and name it appropriately.

"},{"location":"Components/Environment/AddNewEnvironment/AddTrafficLights/#2-add-a-mesh-filter-and-select-meshes","title":"2. Add a Mesh Filter and select meshes","text":"
  1. Click on the Add Component button.

  2. Search for Mesh Filter and select it by clicking on it.

  3. For each TrafficLight specify the mesh you want to use.

"},{"location":"Components/Environment/AddNewEnvironment/AddTrafficLights/#3add-a-mesh-renderer-and-specify-materials","title":"3.Add a Mesh Renderer and specify materials","text":"
  1. The same way as above search for Mesh Renderer and select it.

  2. Now you need to specify individual component materials.

    For example in the Traffic.Lights.001 mesh there are four sub-meshes that need their individual materials.

    To specify a material click on the selection button on Materials element and search for the material you want to use and select it.

    Repeat this process until you have specified all materials. If you add one more material than there are sub-meshes, you will see this warning. Then just remove the last material and the TrafficLight is prepared.

    !!! info A different material for every bulb is necessary for the color-changing behavior that we expect from traffic lights. Even though in most cases you will use the same material for every Bulb, having them as separate elements is necessary. Please only use models of TrafficLights that have a different Materials Element for every Bulb.

    !!! warning \"Materials order\" When specifying materials remember the order in which they are used in the mesh. Especially remember what Materials Elements are associated with every Bulb in the TrafficLight. This information will be needed later.

      !!! example\n      In the case of `Traffic.Lights.001` the bulb materials are ordered starting from the left side with index 1 and increasing to the right.\n\n      ![bulb 1](traffic_light_1_bulb.png)\n\n      ![bulb 2](traffic_light_2_bulb.png)\n\n      ![bulb 3](traffic_light_3_bulb.png)\n
"},{"location":"Components/Environment/AddNewEnvironment/AddTrafficLights/#4-add-a-mesh-collider","title":"4. Add a Mesh Collider","text":"

In the same way as above, search for Mesh Collider and select it. The collider may not seem useful, as the TrafficLight will in many cases be out of reach of vehicles. It is, however, used for LiDAR simulation, so it is advised to always add colliders to Objects that should be detected by LiDARs. If you are preparing many such objects, the collider can also be added from a script - see the sketch below.
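
A minimal sketch of such a helper - the class below is hypothetical, not part of AWSIM:

using UnityEngine;\n\n// Hypothetical helper: ensures a MeshCollider exists so the object can be detected by simulated LiDARs.\npublic static class ColliderUtil\n{\n    public static void EnsureMeshCollider(GameObject obj)\n    {\n        if (obj.GetComponent<MeshCollider>() == null)\n        {\n            var meshCollider = obj.AddComponent<MeshCollider>();\n            // Reuse the Mesh Filter's mesh so LiDAR hits match the rendered geometry.\n            var meshFilter = obj.GetComponent<MeshFilter>();\n            if (meshFilter != null) meshCollider.sharedMesh = meshFilter.sharedMesh;\n        }\n    }\n}\n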

"},{"location":"Components/Environment/AddNewEnvironment/AddTrafficLights/#5-position-trafficlight-in-the-environment","title":"5. Position TrafficLight in the Environment","text":"

Finally, after configuring all visual aspects of the TrafficLight, you can position it in the environment. Do this by dragging the TrafficLight with a square representing a plane or with an arrow representing a single axis.

"},{"location":"Components/Environment/AddNewEnvironment/AddTrafficLights/#6-add-trafficlight-script","title":"6. Add TrafficLight Script","text":"

The Traffic Light Script enables you to control how the TrafficLight lights up and to create sequences.

  1. Click on Add Component, search for the Traffic Light script and select it.

  2. You should see the Bulb Emission config already configured. These are the colors that will be used to light up the Bulbs in TrafficLight. You may adjust them to suit your needs.

  3. You will have to specify the Bulb material config, in which you should add elements with the following fields: - Bulb Type - one of the predefined Bulb types that describes the Bulb (its color and pattern).

    - Material Index - index of the material that you want to associate with the Bulb Type. This is where you need the information noted earlier about which Materials Element corresponds to which bulb sub-mesh.

    !!! example \"Bulb configuration example\" Here we specify an element Type as RED_BULB and associate it with Material that has an index 3. This will result in associating the right most bulb with the name RED_BULB. This information will be of use to us when specifying TrafficLights sequences.

      ![traffic light bulb config](traffic_light_bulb_config.gif)\n
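
Conceptually, each element of the Bulb material config is just a pair of a bulb type and a material index. The sketch below only illustrates that mapping - the type and field names are illustrative, not the exact serialized types used by the Traffic Light script:

[System.Serializable]\npublic class BulbMaterialEntry\n{\n    public string bulbType;    // one of the predefined Bulb types, e.g. \"RED_BULB\"\n    public int materialIndex;  // index of the Materials Element associated with this Bulb\n}\n\n// Example matching the configuration above:\n// var entry = new BulbMaterialEntry { bulbType = \"RED_BULB\", materialIndex = 3 };\n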

Success

Once you have added TrafficLights to your Environment, you can start configuring RandomTraffic, which will add moving vehicles to it! Details here.

"},{"location":"Components/Environment/CreatePCD/","title":"Create PCD","text":""},{"location":"Components/Environment/CreatePCD/#create-pcd","title":"Create PCD","text":"

This section

This section is still under development!

"},{"location":"Components/Environment/CreatePCD/#pointcloudmapper","title":"PointCloudMapper","text":""},{"location":"Components/Environment/CreatePCD/#description","title":"Description","text":"

PointCloudMapper is a tool for vehicle-based point cloud mapping in a simulation environment. It is very useful when you need a point cloud based on some location but don't have the possibility to physically map the real place. Instead, you can map the simulated environment.

"},{"location":"Components/Environment/CreatePCD/#required-data","title":"Required Data","text":"

To properly perform the mapping, make sure you have the following files downloaded and configured:

"},{"location":"Components/Environment/CreatePCD/#import-osm","title":"Import OSM","text":"
  1. Drag and drop an OSM file into the Unity project.

  2. The OSM file will be imported as an OsmDataContainer.

"},{"location":"Components/Environment/CreatePCD/#setup-an-environment","title":"Setup an Environment","text":"

For mapping, an Environment prefab is needed. The easiest way is to create a new Scene and import the Environment prefab into it. Details on how to do this can be found on this tutorial page.

"},{"location":"Components/Environment/CreatePCD/#setup-a-vehicle","title":"Setup a Vehicle","text":"

Create a Vehicle GameObject in the Hierarchy view.

"},{"location":"Components/Environment/CreatePCD/#add-visual-elements-optional","title":"Add visual elements (optional)","text":"

Add a vehicle model by adding a Geometry Object as a child of the Vehicle and adding all visual elements as its children.

Visual elements

You can learn how to add visual elements and required components like Mesh Filter or Mesh Renderer in this tutorial.

"},{"location":"Components/Environment/CreatePCD/#add-a-camera-optional","title":"Add a Camera (optional)","text":"

Add a Camera Component for enhanced visuals by adding a Main Camera Object as a child of the Vehicle Object and attaching a Camera Component to it.

  1. Add a Main Camera Object.

  2. Add a Camera Component by clicking the 'Add Component' button, searching for it and selecting it.

  3. Change the Transform for an even better visual experience.

    !!! note \"Camera preview\" Observe how the Camera preview changes when adjusting the transformation.

"},{"location":"Components/Environment/CreatePCD/#setup-vehicle-sensors-rgl","title":"Setup Vehicle Sensors (RGL)","text":"

This part of the tutorial shows how to add a LiDAR sensor using RGL.

RGL Scene Manager

Please make sure that RGLSceneManager is added to the scene. For more details and instructions on how to do this, please visit this tutorial page.

  1. Create an empty Sensors GameObject as a child of the Vehicle Object.

  2. Create a Lidar GameObject as a child of the Sensors Object.

  3. Attach Lidar Sensor (script) to the previously created Lidar Object by clicking on the 'Add Component' button, searching for the script and selecting it.

    !!! note \"Point Cloud Visualization\" Please note that Point Cloud Visualization (script) will be added automatically with the Lidar Sensor (script).

  4. Configure LiDAR pattern, e.g. by selecting one of the available presets.

    !!! example \"Example Lidar Sensor configuration\"

    !!! note \"Gaussian noise\" Gaussian noise should be disabled to achieve a more accurate map.

  5. Attach RGL Mapping Adapter (script) to the previously created Lidar Object by clicking on the 'Add Component' button, searching for the script and selecting it.

  6. Configure RGL Mapping Adapter - e.g. set Leaf Size for filtering.

    !!! example \"Example RGL Mapping Adapter configuration\"

    !!! note \"Downsampling\" Please note that downsampling is applied on the single LiDAR scans only. If you would like to filter merged scans use the external tool described below.

"},{"location":"Components/Environment/CreatePCD/#effect-of-leaf-size-to-point-cloud-data-pcd-generation","title":"Effect of Leaf Size to Point Cloud Data (PCD) generation","text":"

Downsampling aims to reduce the PCD size, which for large point clouds may reach gigabytes, in exchange for map detail. It is essential to find the best balance between the size and an acceptable level of detail.

A small Leaf Size results in a more detailed PCD, while a large Leaf Size could result in excessive filtering such that objects like buildings are not recorded in the PCD.

In the following examples, it can be observed that when the Leaf Size is 1.0, the point cloud is very detailed. When the Leaf Size is 100.0, buildings are filtered out, which results in an empty PCD. A Leaf Size of 10.0 results in a reasonable PCD in the given example.
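
As a rough intuition (a simplifying assumption for illustration, not a statement from the tool's documentation): voxel-grid downsampling keeps roughly one point per occupied cube with edge length equal to the Leaf Size \\(l\\), so for an approximately flat surface of area \\(A\\) the number of retained points scales as \\(A/l^2\\). For example, a 20 m x 10 m building wall (\\(A = 200\\,{m^2}\\)) keeps on the order of 200 points at \\(l = 1.0\\), but falls entirely inside a single voxel at \\(l = 100\\), which is why buildings disappear from the PCD.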

Leaf Size = 1.0 Leaf Size = 10.0 Leaf Size = 100.0"},{"location":"Components/Environment/CreatePCD/#setup-pointcloudmapper","title":"Setup PointCloudMapper","text":"
  1. Create a PointCloudMapper GameObject in the Hierarchy view.

  2. Attach Point Cloud Mapper (script) to the previously created Point Cloud Mapper Object by clicking on the 'Add Component' button, searching for the script and selecting it.

  3. Configure the Point Cloud Mapper fields:

    - Osm Container - the OSM file you imported earlier - World Origin - MGRS position of the origin of the scene

      !!! note \"World Origin coordinate system\"\n      Use [*ROS* coordinate system](../../Vehicle/AddNewVehicle/AddSensors/#coordinate-system-conversion) for *World Origin*, not Unity.\n

    - Capture Location Interval - Distance between consecutive capture points along lanelet centerline - Output Pcd File Path - Output relative path from Assets folder - Target Vehicle - The vehicle you want to use for point cloud capturing that you created earlier

    !!! example \"Example Point Cloud Mapper configuration\"

    !!! note \"Lanelet visualization\" It is recommended to disable Lanelet Visualizer by setting Material to None and Width equal to zero. Rendered Lanelet is not ignored by the LiDAR so it would be captured in the PCD.

"},{"location":"Components/Environment/CreatePCD/#effect-of-capture-location-interval-to-pcd-generation","title":"Effect of Capture Location Interval to PCD generation","text":"

If the Capture Location Interval is too large, it could result in a sparse PCD where some regions of the map are captured well but other regions aren't captured at all.

In the examples below, a Leaf Size of 0.2 was used. Please note that using a different combination of Leaf Size and Capture Location Interval may result in a different PCD.

Capture Location Interval = 6 Capture Location Interval = 20 Capture Location Interval = 100"},{"location":"Components/Environment/CreatePCD/#capture-and-generate-pcd","title":"Capture and Generate PCD","text":"

If you play the simulation with a scene prepared using the steps above, PointCloudMapper will automatically start mapping. The vehicle will warp along the centerlines at intervals of CaptureLocationInterval and capture point cloud data. The PCD file will be written when you stop the scene or when all locations on the route have been captured.

If the Vehicle stops moving for a while and you see the following message in the bottom left corner, you can safely stop the scene.

The point cloud *.pcd file is saved to the location you specified in the Point Cloud Mapper.

"},{"location":"Components/Environment/CreatePCD/#pcd-postprocessing","title":"PCD postprocessing","text":"

Install required tool

The tool (DownsampleLargePCD) required for PCD conversion can be found under the link. The README contains build instructions and usage.

The generated PCD file is typically too large, therefore you need to downsample it. It should also be converted to ASCII format, because Autoware accepts only this format, while PointCloudMapper writes the PCD in binary format.

  1. Change the working directory to the location with DownsampleLargePCD tool.
  2. Use this tool to downsample and save PCD in ASCII format.

    ./DownsampleLargePCD -in <PATH_TO_INPUT_PCD> -out <PATH_TO_OUTPUT_PCD> -leaf 0.2,0.2,0.2\n
    - Assuming the input PCD is in your working directory and named in_cloud.pcd, and the output PCD is to be named out_cloud.pcd, the command will be:
    ./DownsampleLargePCD -in in_cloud.pcd -out out_cloud.pcd -leaf 0.2,0.2,0.2\n
    - You can also save the PCD in binary format by adding the -binary 1 option.
  3. Your PCD is ready to use.

Converting PCD format without downsampling

If you don't want to downsample your PCD, you can convert the PCD file to ASCII format with the pcl_convert_pcd_ascii_binary tool. This tool is available in the pcl-tools package and can be installed on Ubuntu with the following command:

sudo apt install pcl-tools\n
To convert your PCD, use the command:
pcl_convert_pcd_ascii_binary <PATH_TO_INPUT_PCD> <PATH_TO_OUTPUT_PCD> 0\n
"},{"location":"Components/Environment/CreatePCD/#verify-the-pcd","title":"Verify the PCD","text":"

To verify your PCD you can launch the Autoware with the PCD file specified.

  1. Copy your PCD from the AWSIM project directory to the Autoware map directory.

    cp <PATH_TO_PCD_FILE> <PATH_TO_AUTOWARE_MAP>/\n
  2. Source ROS and Autoware

    source /opt/ros/humble/setup.bash\nsource <PATH_TO_AUTOWARE>/install/setup.bash\n
  3. Launch the planning simulation with the map directory path (map_path) and PCD file (pointcloud_map_file) specified.

    !!! note \"PCD file location\" The PCD file needs to be located in the Autoware map directory and as a pointcloud_map_file parameter you only supply the file name, not the path.

    !!! warning \"Absolute path\" When launching Autoware never use ~/ to specify the home directory. Either write the full absolute path ot use $HOME environmental variable.

    ros2 launch autoware_launch planning_simulator.launch.xml vehicle_model:=sample_vehicle sensor_model:=sample_sensor_kit map_path:=<ABSOLUTE_PATH_TO_AUTOWARE_MAP> pointcloud_map_file:=<PCD_FILE_NAME>\n
  4. Wait for Autoware to finish loading and inspect the PCD visually, keeping in mind the Effect of Leaf Size and the Effect of Capture Location Interval described above.

"},{"location":"Components/Environment/CreatePCD/#sample-scene","title":"Sample Scene","text":"

PointCloudMapping.unity is a sample scene showcasing PointCloudMapper. It requires setting up the OSM data and the 3D model map of the area according to the steps above.

Sample Mapping Scene

In this example you can see a correctly configured Point Cloud Mapping Scene.

"},{"location":"Components/Environment/LaneletBoundsVisualizer/","title":"LaneletBoundsVisualizer","text":""},{"location":"Components/Environment/LaneletBoundsVisualizer/#lanelet-bounds-visualizer","title":"Lanelet Bounds Visualizer","text":"

Lanelet Bounds Visualizer is a Unity Editor extension that allows the user to load the left and right bounds of a Lanelet into the Unity scene.

"},{"location":"Components/Environment/LaneletBoundsVisualizer/#usage","title":"Usage","text":"

The Lanelet bounds loading process can be performed by opening AWSIM -> Visualize -> Load Lanelet Bounds in the top toolbar of the Unity Editor.

The window shown below will pop up. Select your Osm Data Container to specify which OSM data to load the Lanelet from.

The user can select whether to load the raw Lanelet or to adjust the resolution of the Lanelet by specifying the waypoint settings.

To load the raw Lanelet, simply click the Load Raw Lanelet button.

If the user wishes to change the resolution of the Lanelet, adjust the parameters of the Waypoint Settings as described below, and click the Load with Waypoint Settings button.

Once the Lanelet is successfully loaded, Lanelet bounds will be generated as a new GameObject named LaneletBounds.

To visualize the LaneletBounds, make sure Gizmos is turned on and select the LaneletBounds GameObject.

"},{"location":"Components/Environment/LaneletBoundsVisualizer/#important-notes","title":"Important Notes","text":"

Generally speaking, visualizing Lanelet Bounds results in a very laggy simulation, so it is recommended to hide the LaneletBounds GameObject when it is not in use. The lag becomes worse as you set the resolution of the Lanelet Bounds higher, so it is also recommended to keep the resolution within a reasonable range.

It is also important to note that no matter how high you set the resolution, it will not be any better than the original Lanelet (i.e. the raw data). Rather, the computational load will increase and the simulation will become more laggy. If you wish to get the highest quality of Lanelet Bounds, it is recommended to use the Load Raw Lanelet button.

In short, the Waypoint Settings parameters should be thought of as a way to decrease the resolution relative to the original Lanelet, decreasing the computational load and thus reducing the lag of the simulation.

Higher Resolution Raw Lanelet Lower Resolution"},{"location":"Components/Environment/SmokeSimulator/","title":"SmokeSimulator","text":""},{"location":"Components/Environment/SmokeSimulator/#smoke-simulator","title":"Smoke Simulator","text":"

Simulating smoke in AWSIM may be useful when one wants to simulate exhaust gases from vehicles, smoke from emergency flares, etc.

In Unity, it is common to use the Particle System to simulate smoke. However, smoke simulated by the Particle System cannot be sensed by RGL in AWSIM, although in reality smoke is detected by LiDAR.

Smoke Simulator was developed to simulate smoke that can be detected by RGL in Unity. Smoke Simulator works by instantiating many small cubic GameObjects called Smoke Particles and allows each particle to be detected by RGL.

This document describes how to use the Smoke Simulator.

"},{"location":"Components/Environment/SmokeSimulator/#setting-smoke-simulator","title":"Setting Smoke Simulator","text":"

1. Create an empty GameObject.

2. Attach SmokeGenerator.cs to the previously created GameObject.

3. Adjust the parameters of the SmokeGenerator as described below.

4. (Optional) You may also specify the Material of the Smoke Particles; if this field is unspecified, a default material is used. The whole setup can also be scripted - see the sketch below.
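
A minimal sketch of the scripted setup, assuming only that the component class is called SmokeGenerator as in step 2 (its parameter fields depend on the AWSIM version and are adjusted in the Inspector here):

using UnityEngine;\n\npublic class SmokeSetupExample : MonoBehaviour\n{\n    void Start()\n    {\n        // 1. Create an empty GameObject for the smoke source.\n        var smoke = new GameObject(\"Smoke\");\n        smoke.transform.position = transform.position;\n        // 2. Attach the SmokeGenerator component; 3.-4. its parameters and material are adjusted in the Inspector.\n        smoke.AddComponent<SmokeGenerator>();\n    }\n}\n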

"},{"location":"Components/Environment/SmokeSimulator/#example-of-different-smokes","title":"Example of Different Smokes","text":""},{"location":"Components/Environment/SmokeSimulator/#thin-smoke","title":"Thin Smoke","text":""},{"location":"Components/Environment/SmokeSimulator/#heavy-smoke","title":"Heavy Smoke","text":""},{"location":"Components/Environment/V2I/","title":"V2I","text":""},{"location":"Components/Environment/V2I/#v2i-vehicle-to-infrastructure","title":"V2I (Vehicle-to-Infrastructure)","text":"

V2I is a component that simulates the V2I communication protocol, which allows data to be exchanged between vehicles and road infrastructure. In the current version of AWSIM, the V2I component publishes information about traffic lights.

"},{"location":"Components/Environment/V2I/#how-to-add-v2i-to-the-environment","title":"How to add V2I to the environment","text":""},{"location":"Components/Environment/V2I/#assign-lanelet2-wayid-and-relationid-to-trafficlight-object","title":"Assign Lanelet2 WayID and RelationID to TrafficLight object","text":"
  1. Load items from lanelet2 following the instructions

  2. Verify that the Traffic Light Lanelet ID component has been added to the Traffic Light game objects.

  3. Verify that the WayID and RelationID have been correctly assigned. You can use Vector Map Builder as presented below

"},{"location":"Components/Environment/V2I/#add-manually-traffic-light-lanelet-id-component-alternatively","title":"Add manually Traffic Light Lanelet ID component (alternatively)","text":"

If, for some reason, the Traffic Light Lanelet ID component has not been added to the Traffic Light object:

  1. Add the component manually

  2. Fill in the Way ID

  3. Fill in the Relation ID

"},{"location":"Components/Environment/V2I/#add-v2i-prefab","title":"Add V2I prefab","text":""},{"location":"Components/Environment/V2I/#select-ego-transform","title":"Select EGO transform","text":""},{"location":"Components/Environment/V2I/#parameters","title":"Parameters","text":"Name Type Description Output Hz int Topic publication frequency Ego Vehicle Transform transform Ego Vehicle object transform Ego Distance To Traffic Signals double Maximum distance between Traffic Light and Ego Traffic Signal ID enum Possibility to select if as traffic_signal_id field in msg is Relation ID or Way ID Traffic Signals Topic string Topic name

Note

The V2I feature can be used as a source of Traffic Light ground truth information; for that usage, Way ID should be selected.

"},{"location":"Components/ROS2/AddACustomROS2Message/","title":"Add custom ROS2 msg","text":""},{"location":"Components/ROS2/AddACustomROS2Message/#add-a-custom-ros2-message","title":"Add a custom ROS2 message","text":"

If you want to use a custom message in AWSIM, you need to generate the appropriate files. To do this, you have to build ROS2ForUnity yourself - please follow the steps below. Remember to start with the prerequisites though.

ROS2ForUnity role

For a better understanding of the role of ROS2ForUnity and the messages used, we encourage you to read this section.

custom_msgs

In order to simplify this tutorial, the name of the package containing the custom message is assumed to be custom_msgs - remember to replace it with the name of your package.

"},{"location":"Components/ROS2/AddACustomROS2Message/#prerequisites","title":"Prerequisites","text":"

ROS2ForUnity depends on ros2cs - a C# .NET library for ROS2. This library is already included, so you don't need to install it, but there are a few prerequisites that must be resolved first.

Please select your system and resolve all prerequisites:

UbuntuWindows "},{"location":"Components/ROS2/AddACustomROS2Message/#1-workspace-preparation","title":"1. Workspace preparation","text":"
  1. Clone the ROS2ForUnity repository by executing the command:

    === \"Ubuntu\"

    git clone https://github.com/RobotecAI/ros2-for-unity ~/\n
    !!! warning The cloned ROS 2 For Unity repository must be located in the home directory ~/. === \"Windows\"
    git clone https://github.com/RobotecAI/ros2-for-unity /C\n
    !!! warning The cloned ROS 2 For Unity repository must be located directly in C:\\.
  2. Pull the dependent repositories by executing the commands:

    === \"Ubuntu\"

    cd ~/ros2-for-unity\n. /opt/ros/humble/setup.bash\n./pull_repositories.sh\n

    === \"Windows\"

    cd C:\\ros2-for-unity\nC:\\ros2_humble\\local_setup.ps1\n.\\pull_repositories.ps1\n
"},{"location":"Components/ROS2/AddACustomROS2Message/#2-setup-custom_msgs-package","title":"2. Setup custom_msgs package","text":"

The method of adding a custom package to the build depends on where the package is located. It can be on your local machine or hosted in a git repository. Please choose the appropriate option and follow the instructions.

"},{"location":"Components/ROS2/AddACustomROS2Message/#21-package-contained-on-local-machine","title":"2.1. Package contained on local machine","text":"
  1. Copy the custom_msgs package containing the custom message to the src/ros2cs/custom_messages directory

    === \"Ubuntu\"

    cp -r ~/custom_msgs ~/ros2-for-unity/src/ros2cs/custom_messages/\n

    === \"Windows\"

    Copy-Item 'C:\\custom_msgs' -Destination 'C:\\ros2-for-unity\\src\\ros2cs\\custom_messages'\n
"},{"location":"Components/ROS2/AddACustomROS2Message/#22-package-hosted-on-git-repository","title":"2.2. Package hosted on git repository","text":"
  1. Open ros2-for-unity/ros2_for_unity_custom_messages.repos file in editor.
  2. Modify the contents of the file shown below, uncomment and set:

    - <package_name> - your package name, so in this case custom_msgs, - <repo_url> - the repository address, - <repo_branch> - the desired branch.

    repositories:\n#  src/ros2cs/custom_messages/<package_name>:\n#    type: git\n#    url: <repo_url>\n#    version: <repo_branch>\n

    !!! example Below is an example of a file configured to pull 2 message packages (custom_msgs, autoware_auto_msgs) hosted on git repositories.

    # NOTE: Use this file if you want to build with custom messages that reside in a separate remote repo.\n# NOTE: use the following format\n\nrepositories:\n    src/ros2cs/custom_messages/custom_msgs:\n        type: git\n        url: https://github.com/tier4/custom_msgs.git\n        version: main\n    src/ros2cs/custom_messages/autoware_auto_msgs:\n        type: git\n        url: https://github.com/tier4/autoware_auto_msgs.git\n        version: tier4/main\n
  3. Now pull the repositories again (also the custom_msgs package repository)

    === \"Ubuntu\"

    cd ~/ros2-for-unity\n./pull_repositories.sh\n

    === \"Windows\"

    cd C:\\ros2-for-unity\n.\\pull_repositories.ps1\n
"},{"location":"Components/ROS2/AddACustomROS2Message/#3-build-ros-2-for-unity","title":"3. Build ROS 2 For Unity","text":"

Build ROS2ForUnity with custom message packages using the following commands:

UbuntuWindows
cd ~/ros2-for-unity\n./build.sh --standalone\n
cd C:\\ros2-for-unity\n.\\build.ps1 -standalone\n
"},{"location":"Components/ROS2/AddACustomROS2Message/#4-install-custom_msgs-to-awsim","title":"4. Install custom_msgs to AWSIM","text":"

The new ROS2ForUnity build, which you just made in step 3, contains multiple libraries that already exist in AWSIM. To install custom_msgs without copying all the other unnecessary files, you should take only the custom_msgs-related libraries.

You can find them in the following directories; simply copy them to the analogous directories in the AWSIM/Assets/Ros2ForUnity folder, or use the script described here.

UbuntuWindows

- files in ros2-for-unity/install/asset/Ros2ForUnity/Plugins whose names match custom_msgs_* - files in ros2-for-unity/install/asset/Ros2ForUnity/Plugins/Windows/x86_64/ whose names match custom_msgs_*

"},{"location":"Components/ROS2/AddACustomROS2Message/#automation-of-copying-message-files","title":"Automation of copying message files","text":"UbuntuWindows

To automate the process, you can use a script that copies all files related to your custom_msgs package.

  1. Create a file named copy_custom_msgs.sh in the ~/ros2-for-unity/ directory and paste the following content into it.
    #!/bin/bash\necho \"CUSTOM_MSGS_PACKAGE_NAME: $1\"\necho \"AWSIM_DIR_PATH: $2\"\nfind ./install/asset/Ros2ForUnity/Plugins -maxdepth 1 -name \"$1*\" -type f -exec cp {} $2/Assets/Ros2ForUnity/Plugins \\;\nfind ./install/asset/Ros2ForUnity/Plugins/Linux/x86_64 -maxdepth 1 -name \"lib$1*\" -type f -exec cp {} $2/Assets/Ros2ForUnity/Plugins/Linux/x86_64 \\;\n
  2. Save the file and give it executable rights with the command:
    chmod a+x copy_custom_msgs.sh\n
  3. Run the script with two arguments:
    ./copy_custom_msgs.sh <CUSTOM_MSGS_PACKAGE_NAME> <AWSIM_DIR_PATH>\n

Example

./copy_custom_msgs.sh custom_msgs ~/unity/AWSIM/\n

On Windows, to automate the process you can use the following commands, with the paths changed to match your setup:

Example

Get-ChildItem C:\\ros2-for-unity\\install\\asset\\Ros2ForUnity\\Plugins\\* -Include @('custom_msgs*') | Copy-Item -Destination C:\\unity\\AWSIM\\Assets\\Ros2ForUnity\\Plugins\nGet-ChildItem C:\\ros2-for-unity\\install\\asset\\Ros2ForUnity\\Plugins\\Windows\\x86_64\\* -Include @('custom_msgs*') | Copy-Item -Destination C:\\unity\\AWSIM\\Assets\\Ros2ForUnity\\Plugins\\Windows\\x86_64\n
"},{"location":"Components/ROS2/AddACustomROS2Message/#5-test","title":"5. Test","text":"

Make sure that the custom_msgs package files have been properly copied to AWSIM/Assets/Ros2ForUnity. Then try to create a message object as described in this section and check in the Unity Editor console that it compiles without errors.

"},{"location":"Components/ROS2/ROS2ForUnity/","title":"ROS2 For Unity","text":""},{"location":"Components/ROS2/ROS2ForUnity/#ros2-for-unity","title":"ROS2 For Unity","text":"

The Ros2ForUnity (R2FU) module is a communication solution that tightly connects Unity with the ROS2 ecosystem. Unlike other solutions, it doesn't rely on bridging communication but rather utilizes the ROS2 middleware stack (specifically the rcl layer and below), enabling the inclusion of ROS2 nodes within Unity simulations.

R2FU is used in AWSIM for many reasons. First of all, it offers high-performance integration between Unity and ROS2, with improved throughput and lower latencies compared to bridging solutions. It provides real ROS2 functionality for simulation entities in Unity, supports standard and custom messages, and includes convenient abstractions and tools, all wrapped as a Unity asset. For a detailed description, please see the README.

"},{"location":"Components/ROS2/ROS2ForUnity/#prerequisites","title":"Prerequisites","text":"

This asset can be prepared in two flavours:

By default, the R2FU asset in AWSIM is prepared in standalone mode.

Warning

To avoid internal conflicts between the standalone libraries and the sourced ones, a ROS2 instance shouldn't be sourced before running AWSIM or the Unity Editor.

Can't see topics

There are no errors, but I can't see the topics published by R2FU

Try stopping the ROS2 daemon forcefully (pkill -9 ros2_daemon) and restarting it (ros2 daemon start).

"},{"location":"Components/ROS2/ROS2ForUnity/#concept","title":"Concept","text":"

Describing the concept of using R2FU in AWSIM, we distinguish:

Thanks to the use of R2FU, the SimulatorROS2Node implementation allows you to add ROS2 communication to any Unity component. For example, we can receive control commands from any other ROS2 node and publish the current state of the Ego, such as its position in the environment.

Simulation time

If you want to use system time (ROS2 time) instead of Unity time, use ROS2TimeSource instead of UnityTimeSource in the SimulatorROS2Node class.
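
A sketch of what this swap amounts to - the exact field declaration inside SimulatorROS2Node may differ between versions, so treat the lines below as illustrative:

// Inside the SimulatorROS2Node class (illustrative):\n// ITimeSource timeSource = new UnityTimeSource();  // default: Unity time\n// ITimeSource timeSource = new ROS2TimeSource();   // alternative: system (ROS2) time\n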

"},{"location":"Components/ROS2/ROS2ForUnity/#package-structure","title":"Package structure","text":"

Ros2ForUnity asset contains:

"},{"location":"Components/ROS2/ROS2ForUnity/#scripts","title":"Scripts","text":""},{"location":"Components/ROS2/ROS2ForUnity/#extension-scripts","title":"Extension Scripts","text":"

Additionally, in order to adapt AWSIM to the use of R2FU, the following scripts are used:

"},{"location":"Components/ROS2/ROS2ForUnity/#default-message-types","title":"Default message types","text":"

The basic ROS2 msgs types that are supported in AWSIM by default include:

In order for the message package to be used in Unity, its *.dll and *.so libraries must be generated using R2FU.

Custom message

If you want to generate a custom message to allow it to be used in AWSIM please read this tutorial.

"},{"location":"Components/ROS2/ROS2ForUnity/#use-of-generated-messages-in-unity","title":"Use of generated messages in Unity","text":"

Each message type is composed of other types, which can themselves be complex types. All of them are ultimately based on built-in C# types. The most common built-in types in messages are bool, int, double and string. These types have their communication equivalents using ROS2.

A good example of a complex type that is added to other complex types in order to specify a reference - in the form of a timestamp and a frame - is std_msgs/Header. This message has the following form:

builtin_interfaces/msg/Time stamp\nstring frame_id\n

ROS2 directive

In order to work with ROS2 in Unity, remember to add the directive using ROS2; at the top of the file to import types from this namespace.

"},{"location":"Components/ROS2/ROS2ForUnity/#create-an-object","title":"Create an object","text":"

The simplest way to create an object of Header type is:

var header = new std_msgs.msg.Header()\n{\n    Frame_id = \"map\"\n};\n

It is not required to define the value of each field. As you can see, the code creates an object filling only the Frame_id field, leaving the field of the complex builtin_interfaces/msg/Time type initialized by default. Time is an important element of any message; how to fill it is described here.

"},{"location":"Components/ROS2/ROS2ForUnity/#accessing-and-filling-in-message-fields","title":"Accessing and filling in message fields","text":"

As you might have noticed in the previous example, a ROS2 message in Unity is just a structure containing the same fields - with matching names (up to the capitalization described below) and types. Access to its fields for reading and writing is the same as for any C# structure.

var header2 = new std_msgs.msg.Header();\nheader2.Frame_id = \"map\";\nheader2.Stamp.Sec = 1234567;\nDebug.Log($\"StampSec: {header2.Stamp.Sec} and Frame: {header2.Frame_id}\");\n

Field names

There is one ever-present difference in field names: the first letter of each message field in Unity is always uppercase, even if it is lowercase in the base ROS2 message from which it is generated.

"},{"location":"Components/ROS2/ROS2ForUnity/#filling-a-time","title":"Filling a time","text":"

In order to complete the time field of the Header message, we recommend the following methods in AWSIM:

  1. When the message has no Header but only the Time type:

    var header2 = new std_msgs.msg.Header();\nheader2.Stamp = SimulatorROS2Node.GetCurrentRosTime();\n
  2. When the message has a Header - like for example autoware_auto_vehicle_msgs/VelocityReport:

    velocityReportMsg = new autoware_auto_vehicle_msgs.msg.VelocityReport()\n{\n    Header = new std_msgs.msg.Header()\n    {\n        Frame_id = \"map\",\n    }\n};\nvar velocityReportMsgHeader = velocityReportMsg as MessageWithHeader;\nSimulatorROS2Node.UpdateROSTimestamp(ref velocityReportMsgHeader);\n

These methods allow you to fill the Time field of the message object with the simulation time - from the ROS2Clock.

"},{"location":"Components/ROS2/ROS2ForUnity/#create-a-message-with-array","title":"Create a message with array","text":"

Some message types contain an array of some type. An example of such a message is nav_msgs/Path, which has a PoseStamped array. In order to fill such an array, you must first create a List<T>, fill it and then convert it to a raw array.

var posesList = new List<geometry_msgs.msg.PoseStamped>();\nfor(int i=0; i<=5;++i)\n{\n    var poseStampedMsg = new geometry_msgs.msg.PoseStamped();\n    poseStampedMsg.Pose.Position.X = i;\n    poseStampedMsg.Pose.Position.Y = 5-i;\n    var poseStampedMsgHeader = poseStampedMsg as MessageWithHeader;\n    SimulatorROS2Node.UpdateROSTimestamp(ref poseStampedMsgHeader);\n    posesList.Add(poseStampedMsg);\n}\nvar pathMsg = new nav_msgs.msg.Path(){Poses=posesList.ToArray()};\nvar pathMsgHeader = pathMsg as MessageWithHeader;\nSimulatorROS2Node.UpdateROSTimestamp(ref pathMsgHeader);\n// pathMsg is ready\n
"},{"location":"Components/ROS2/ROS2ForUnity/#publish-on-the-topic","title":"Publish on the topic","text":"

In order to publish messages, a publisher object must be created. The static method CreatePublisher of the SimulatorROS2Node makes this easy. You must specify the type of message, the topic on which it will be published, and the QoS profile. Below is an example of publishing a message of type autoware_auto_vehicle_msgs.msg.VelocityReport with a frequency of 30Hz on the /vehicle/status/velocity_status topic; the QoS profile is (Reliability=Reliable, Durability=Volatile, History=Keep last, Depth=1):

using UnityEngine;\nusing ROS2;\n\nnamespace AWSIM\n{\n    public class VehicleReportRos2Publisher : MonoBehaviour\n    {\n        float timer = 0;\n        int publishHz = 30;\n        QoSSettings qosSettings = new QoSSettings()\n        {\n            ReliabilityPolicy = ReliabilityPolicy.QOS_POLICY_RELIABILITY_RELIABLE,\n            DurabilityPolicy = DurabilityPolicy.QOS_POLICY_DURABILITY_VOLATILE,\n            HistoryPolicy = HistoryPolicy.QOS_POLICY_HISTORY_KEEP_LAST,\n            Depth = 1,\n        };\n        string velocityReportTopic = \"/vehicle/status/velocity_status\";\n        autoware_auto_vehicle_msgs.msg.VelocityReport velocityReportMsg;\n        IPublisher<autoware_auto_vehicle_msgs.msg.VelocityReport> velocityReportPublisher;\n\n        void Start()\n        {\n            // Create a message object and fill in the constant fields\n            velocityReportMsg = new autoware_auto_vehicle_msgs.msg.VelocityReport()\n            {\n                Header = new std_msgs.msg.Header()\n                {\n                    Frame_id = \"map\",\n                }\n            };\n\n            // Create publisher with specific topic and QoS profile\n            velocityReportPublisher = SimulatorROS2Node.CreatePublisher<autoware_auto_vehicle_msgs.msg.VelocityReport>(velocityReportTopic, qosSettings.GetQoSProfile());\n        }\n\n         bool NeedToPublish()\n        {\n            timer += Time.deltaTime;\n            var interval = 1.0f / publishHz;\n            interval -= 0.00001f;\n            if (timer < interval)\n                return false;\n            timer = 0;\n            return true;\n        }\n\n        void FixedUpdate()\n        {\n            // Provide publications with a given frequency\n            if (NeedToPublish())\n            {\n                // Fill in non-constant fields\n                velocityReportMsg.Longitudinal_velocity = 1.00f;\n                velocityReportMsg.Lateral_velocity = 0.00f;\n                velocityReportMsg.Heading_rate = 0.00f;\n\n                // Update Stamp\n                var velocityReportMsgHeader = velocityReportMsg as MessageWithHeader;\n                SimulatorROS2Node.UpdateROSTimestamp(ref velocityReportMsgHeader);\n\n                // Publish\n                velocityReportPublisher.Publish(velocityReportMsg);\n            }\n        }\n    }\n}\n
"},{"location":"Components/ROS2/ROS2ForUnity/#upper-limit-to-publish-rate","title":"Upper limit to publish rate","text":"

The above example implements publishing within the FixedUpdate Unity event method. However, this approach has a limitation: the maximum output frequency is directly tied to the current value of Fixed TimeStep specified in the Project Settings. Since AWSIM targets a frame rate of 60 frames per second (FPS), the current Fixed TimeStep is set to 1/60 s. This imposes a 60 Hz limit on the publish rate of any sensor implemented within the FixedUpdate method - for example, requesting publishHz = 100 in the script above would still yield at most 60 messages per second. If a higher output frequency is necessary, an alternative implementation must be considered, or the Fixed TimeStep setting must be adjusted in Editor->Project Settings->Time.

The table provided below presents a list of sensors along with examples of topics that are constrained by the Fixed TimeStep limitation.

Object Topic GNSS Sensor /sensing/gnss/pose IMU Sensor /sensing/imu/tamagawa/imu_raw Traffic Camera /sensing/camera/traffic_light/image_raw Pose Sensor /awsim/ground_truth/vehicle/pose OdometrySensor /awsim/ground_truth/localization/kinematic_state LIDAR /sensing/lidar/top/pointcloud_raw Vehicle Status /vehicle/status/velocity_status

If the sensor or any other publishing object within AWSIM does not have any direct correlation with physics (i.e. does not require synchronization with physics), it can be implemented without using the FixedUpdate method. This bypasses the upper limit imposed by the Fixed TimeStep.

The table presented below shows a list of objects that are not constrained by the Fixed TimeStep limitation.

Object Topic Clock /clock"},{"location":"Components/ROS2/ROS2ForUnity/#subscribe-to-the-topic","title":"Subscribe to the topic","text":"

In order to subscribe to messages, a subscriber object must be created. The static method CreateSubscription of the SimulatorROS2Node makes this easy. You must specify the type of message, the topic from which it will be subscribed, and the QoS profile. In addition, a callback must be defined, which will be called when a message is received - in particular, it can be defined as a lambda expression. Below is an example of subscribing to messages of type std_msgs.msg.Bool on the /vehicle/is_vehicle_stopped topic; the QoS profile is \u201csystem default\u201d:

using UnityEngine;\nusing ROS2;\n\nnamespace AWSIM\n{\n    public class VehicleStoppedSubscriber : MonoBehaviour\n    {\n        QoSSettings qosSettings = new QoSSettings();\n        string isVehicleStoppedTopic = \"/vehicle/is_vehicle_stopped\";\n        bool isVehicleStopped = false;\n        ISubscription<std_msgs.msg.Bool> isVehicleStoppedSubscriber;\n\n        void Start()\n        {\n            isVehicleStoppedSubscriber = SimulatorROS2Node.CreateSubscription<std_msgs.msg.Bool>(isVehicleStoppedTopic, VehicleStoppedCallback, qosSettings.GetQoSProfile());\n        }\n\n        void VehicleStoppedCallback(std_msgs.msg.Bool msg)\n        {\n            isVehicleStopped = msg.Data;\n        }\n\n        void OnDestroy()\n        {\n            SimulatorROS2Node.RemoveSubscription<std_msgs.msg.Bool>(isVehicleStoppedSubscriber);\n        }\n    }\n}\n
"},{"location":"Components/ROS2/ROS2TopicList/","title":"ROS2 topic list","text":""},{"location":"Components/ROS2/ROS2TopicList/#ros2-topic-list","title":"ROS2 topic list","text":"

The following is a summary of the ROS2 topics that the AWSIM node subscribes to and publishes on.

Ros2ForUnity

AWSIM works with ROS2 thanks to the use of Ros2ForUnity - read the details here. If you want to generate a custom message to allow it to be used in AWSIM please read this tutorial.

"},{"location":"Components/ROS2/ROS2TopicList/#list-of-subscribers","title":"List of subscribers","text":"Category Topic Message type frame_id Hz QoS

Control

Ackermann Control /control/command/control_cmd autoware_auto_control_msgs/AckermannControlCommand - 60 Reliable, TransientLocal, KeepLast/1 Gear /control/command/gear_cmd autoware_auto_vehicle_msgs/GearCommand - 10 Reliable, TransientLocal, KeepLast/1 Turn Indicators /control/command/turn_indicators_cmd autoware_auto_vehicle_msgs/TurnIndicatorsCommand - 10 Reliable, TransientLocal, KeepLast/1 Hazard Lights /control/command/hazard_lights_cmd autoware_auto_vehicle_msgs/HazardLightsCommand - 10 Reliable, TransientLocal, KeepLast/1 Emergency /control/command/emergency_cmd tier4_vehicle_msgs/msg/VehicleEmergencyStamped - 60 Reliable, TransientLocal, KeepLast/1"},{"location":"Components/ROS2/ROS2TopicList/#list-of-publishers","title":"List of publishers","text":"Category Topic Message type frame_id Hz QoS

Clock

/clock rosgraph_msgs/Clock - 100 Best effort,Volatile,Keep last/1

Sensors

Camera /sensing/camera/traffic_light/camera_info sensor_msgs/CameraInfo traffic_light_left_camera/camera_link 10 Best effort,Volatile,Keep last/1 Camera /sensing/camera/traffic_light/image_raw sensor_msgs/Image traffic_light_left_camera/camera_link 10 Best effort,Volatile,Keep last/1 GNSS /sensing/gnss/pose geometry_msgs/Pose gnss_link 1 Reliable,Volatile,Keep last/1 GNSS /sensing/gnss/pose_with_covariance geometry_msgs/PoseWithCovarianceStamped gnss_link 1 Reliable,Volatile,Keep last/1 IMU /sensing/imu/tamagawa/imu_raw sensor_msgs/Imu tamagawa/imu_link 30 Reliable,Volatile,Keep last/1000 Top LiDAR /sensing/lidar/top/pointcloud_raw sensor_msgs/PointCloud2 sensor_kit_base_link 10 Best effort,Volatile,Keep last/5 Top LiDAR /sensing/lidar/top/pointcloud_raw_ex sensor_msgs/PointCloud2 sensor_kit_base_link 10 Best effort,Volatile,Keep last/5

Vehicle Status

Velocity /vehicle/status/velocity_status autoware_auto_vehicle_msgs/VelocityReport base_line 30 Reliable,Volatile,Keep last/1 Steering /vehicle/status/steering_status autoware_auto_vehicle_msgs/SteeringReport - 30 Reliable,Volatile,Keep last/1 Control Mode /vehicle/status/control_mode autoware_auto_vehicle_msgs/ControlModeReport - 30 Reliable,Volatile,Keep last/1 Gear /vehicle/status/gear_status autoware_auto_vehicle_msgs/GearReport - 30 Reliable,Volatile,Keep last/1 Turn Indicators /vehicle/status/turn_indicators_status autoware_auto_vehicle_msgs/TurnIndicatorsReport - 30 Reliable,Volatile,Keep last/1 Hazard Lights /vehicle/status/hazard_lights_status autoware_auto_vehicle_msgs/HazardLightsReport - 30 Reliable,Volatile,Keep last/1

Ground Truth

Pose /awsim/ground_truth/vehicle/pose geometry_msgs/PoseStamped base_link 100 Reliable,Volatile,Keep last/1"},{"location":"Components/ScenarioSimulation/PreparingTheConnectionBetweenAWSIMAndScenarioSimulator/","title":"Preparing the connection between AWSIM and scenario_simulator_v2","text":""},{"location":"Components/ScenarioSimulation/PreparingTheConnectionBetweenAWSIMAndScenarioSimulator/#preparing-the-connection-between-awsim-and-scenario_simulator_v2","title":"Preparing the connection between AWSIM and scenario_simulator_v2","text":"

This tutorial describes: - how to modify a scenario to work with AWSIM - how to prepare the AWSIM scene to work with scenario_simulator_v2

"},{"location":"Components/ScenarioSimulation/PreparingTheConnectionBetweenAWSIMAndScenarioSimulator/#scenario-preparation-to-work-with-awsim","title":"Scenario preparation to work with AWSIM","text":"

To prepare the scenario to work with AWSIM, add the model3d field to the entity specification

It is utilized as an asset key to identify the proper prefab.

Adjust the parameters of the configured vehicle to match the entity parameters in AWSIM as closely as required. The bounding box in particular is crucial for validating collisions correctly.

"},{"location":"Components/ScenarioSimulation/PreparingTheConnectionBetweenAWSIMAndScenarioSimulator/#default-awsim-asset-catalog","title":"Default AWSIM asset catalog","text":"

AWSIM currently supports the following asset key values.

The list can be extended if required. Appropriate values should be added to the asset key list in the ScenarioSimulatorConnector component, and the vehicle parameters in scenario_simulator_v2 should match them.

"},{"location":"Components/ScenarioSimulation/PreparingTheConnectionBetweenAWSIMAndScenarioSimulator/#ego-vehicle-entity-with-sensor","title":"Ego Vehicle Entity (with sensor)","text":"model3d boundingbox size (m) wheel base(m) front tread(m) rear tread(m) tier diameter(m) max steer(deg) lexus_rx450h width : 1.920 height : 1.700 length : 4.890 2.105 1.640 1.630 0.766 35"},{"location":"Components/ScenarioSimulation/PreparingTheConnectionBetweenAWSIMAndScenarioSimulator/#npc-vehicle-entity","title":"NPC Vehicle Entity","text":"model3d boundingbox size (m) wheel base(m) front tread(m) rear tread(m) tier diameter(m) max steer(deg) taxi width : 1.695 height : 1.515 length : 4.590 2.680 1.460 1.400 0.635 35 truck_2t width : 1.695 height : 1.960 length : 4.685 2.490 1.395 1.240 0.673 40 hatchback width : 1.695 height 1.515 length : 3.940 2.550 1.480 1.475 0.600 35 van width : 1.880 height : 2.285 length : 4.695 2.570 1.655 1.650 0.600 35 small_car width : 1.475 height 1.800 length : 3.395 2.520 1.305 1.305 0.557 35"},{"location":"Components/ScenarioSimulation/PreparingTheConnectionBetweenAWSIMAndScenarioSimulator/#npc-pedestrian-entity","title":"NPC Pedestrian Entity","text":"model3d boundingbox size (m) human width : 0.400 height : 1.800 length : 0.300"},{"location":"Components/ScenarioSimulation/PreparingTheConnectionBetweenAWSIMAndScenarioSimulator/#misc-object-entity","title":"Misc Object Entity","text":"model3d boundingbox size (m) sign_board width : 0.31 height : 0.58 length : 0.21"},{"location":"Components/ScenarioSimulation/PreparingTheConnectionBetweenAWSIMAndScenarioSimulator/#scenarios-limitations","title":"Scenarios limitations","text":"

The vast majority of features supported by scenario_simulator_v2 are supported with AWSIM as well. The currently supported features are described in scenario_simulator_v2's documentation.

Features which are not supported when connected with AWSIM are listed below.

  1. Controller properties used by attach_*_sensor - pointcloudPublishingDelay - isClairvoyant - detectedObjectPublishingDelay - detectedObjectPositionStandardDeviation - detectedObjectMissingProbability - randomSeed

If those features are crucial for the scenario's execution, the scenario might not work properly.

"},{"location":"Components/ScenarioSimulation/PreparingTheConnectionBetweenAWSIMAndScenarioSimulator/#awsim-scene-preparation-to-work-with-scenario_simulator_v2","title":"AWSIM scene preparation to work with scenario_simulator_v2","text":"
  1. Disable or remove random traffic and any pre-spawned NPCs
  2. Disable or remove V2I traffic lights publishing
  3. Disable or remove the clock publisher

  4. Add ScenarioSimulatorConnector prefab to the scene - located in Assets/ScenarioSimulatorConnector

  5. Add Ego Follow Camera object - most likely Main Camera

  6. If necessary, update the asset_id-to-prefab mapping - a key from this map can then be used in the scenario

  7. Add TimeSourceSelector prefab to the scene - located in Assets/AWSIM/Scripts/Clock/Prefabs

  8. Configure Type in the TimeSourceSelector component to SS2

"},{"location":"Components/ScenarioSimulation/SetupUnityProjectForScenarioSimulation/","title":"Setup Unity project for scenario simulation","text":""},{"location":"Components/ScenarioSimulation/SetupUnityProjectForScenarioSimulation/#running-awsim-from-unity-editor-with-scenario_simulator_v2","title":"Running AWSIM from Unity Editor with scenario_simulator_v2","text":"

Below you can find instructions on how to set up scenario execution using scenario_simulator_v2, with AWSIM run from the Unity Editor as the simulator. The instructions assume the Ubuntu OS.

"},{"location":"Components/ScenarioSimulation/SetupUnityProjectForScenarioSimulation/#prerequisites","title":"Prerequisites","text":"
  1. Build Autoware by following the \"Build Autoware with scenario_simulator_v2\" section of the scenario simulator and AWSIM quick start guide

  2. Follow the Setup Unity Project tutorial

"},{"location":"Components/ScenarioSimulation/SetupUnityProjectForScenarioSimulation/#running-the-demo","title":"Running the demo","text":"
  1. Open the AutowareSimulationScenarioSimulator.unity scene placed in the Assets/AWSIM/Scenes/Main directory
  2. Run the simulation by clicking the Play button at the top of the Editor.
  3. Launch scenario_test_runner.

    source install/setup.bash\nros2 launch scenario_test_runner scenario_test_runner.launch.py                        \\\narchitecture_type:=awf/universe  record:=false                                         \\\nscenario:='$(find-pkg-share scenario_test_runner)/scenario/sample_awsim.yaml'          \\\nsensor_model:=awsim_sensor_kit  vehicle_model:=sample_vehicle                          \\\nlaunch_simple_sensor_simulator:=false autoware_launch_file:=\"e2e_simulator.launch.xml\" \\\ninitialize_duration:=260 port:=8080\n

"},{"location":"Components/ScenarioSimulation/SetupUnityProjectForScenarioSimulation/#other-sample-scenarios","title":"Other sample scenarios","text":""},{"location":"Components/ScenarioSimulation/SetupUnityProjectForScenarioSimulation/#conventional-traffic-lights-demo","title":"Conventional traffic lights demo","text":"

This scenario controls traffic signals in the scene based on OpenSCENARIO. It can be used to verify whether the traffic light recognition pipeline in Autoware works well.

ros2 launch scenario_test_runner scenario_test_runner.launch.py                                           \\\narchitecture_type:=awf/universe  record:=false                                                            \\\nscenario:='$(find-pkg-share scenario_test_runner)/scenario/sample_awsim_conventional_traffic_lights.yaml' \\\nsensor_model:=awsim_sensor_kit  vehicle_model:=sample_vehicle                                             \\\nlaunch_simple_sensor_simulator:=false autoware_launch_file:=\"e2e_simulator.launch.xml\"                    \\\ninitialize_duration:=260 port:=8080\n
"},{"location":"Components/ScenarioSimulation/SetupUnityProjectForScenarioSimulation/#v2i-traffic-lights-demo","title":"V2I traffic lights demo","text":"

This scenario publishes V2I traffic signal information based on OpenSCENARIO. It can be used to verify that Autoware responds correctly to V2I traffic light information.

ros2 launch scenario_test_runner scenario_test_runner.launch.py                                  \\\narchitecture_type:=awf/universe  record:=false                                                   \\\nscenario:='$(find-pkg-share scenario_test_runner)/scenario/sample_awsim_v2i_traffic_lights.yaml' \\\nsensor_model:=awsim_sensor_kit  vehicle_model:=sample_vehicle                                    \\\nlaunch_simple_sensor_simulator:=false autoware_launch_file:=\"e2e_simulator.launch.xml\"           \\\ninitialize_duration:=260 port:=8080\n
"},{"location":"Components/Sensors/CameraSensor/","title":"Camera Sensor","text":""},{"location":"Components/Sensors/CameraSensor/#camerasensor","title":"CameraSensor","text":""},{"location":"Components/Sensors/CameraSensor/#introduction","title":"Introduction","text":"

CameraSensor is a component that simulates an RGB camera. Autonomous vehicles can be equipped with many cameras used for various purposes. In the current version of AWSIM, the camera is used primarily to provide the image to the traffic light recognition module in Autoware.

"},{"location":"Components/Sensors/CameraSensor/#prefab","title":"Prefab","text":"

Prefab can be found under the following path:

Assets/AWSIM/Prefabs/Sensors/CameraSensor.prefab\n
"},{"location":"Components/Sensors/CameraSensor/#link-in-the-default-scene","title":"Link in the default Scene","text":"

The mentioned single CameraSensor has its own frame traffic_light_left_camera/camera_link in which its data is published. The sensor prefab is added to this frame. The traffic_light_left_camera/camera_link link is added to the base_link object located in the URDF.

A detailed description of the URDF structure and sensors added to prefab Lexus RX450h 2015 is available in this section.

"},{"location":"Components/Sensors/CameraSensor/#camerasensorholder-script","title":"CameraSensorHolder (script)","text":"

CameraSensorHolder (script) allows the sequential rendering of multiple camera sensors. To utilize it, each CameraSensor object should be attached as a child object of the CameraSensorHolder.

"},{"location":"Components/Sensors/CameraSensor/#elements-configurable-from-the-editor-level","title":"Elements configurable from the editor level","text":""},{"location":"Components/Sensors/CameraSensor/#camerasensor-components","title":"CameraSensor Components","text":"

For the CameraSensor to work properly, the GameObject to which the scripts are added must also have:

TrafficLights recognition

In case of problems with the recognition of traffic lights in Autoware, it may help to increase the image resolution and focal length of the camera in AWSIM.

Camera settings

If you would like to adjust the image captured by the camera, we encourage you to read this manual.

The CameraSensor functionality is split into two scripts:

Scripts can be found under the following path:

Assets/AWSIM/Scripts/Sensors/CameraSensor/*\n

In the same location there are also *.compute files containing used ComputeShaders.

"},{"location":"Components/Sensors/CameraSensor/#camerasensor-script","title":"CameraSensor (script)","text":"

Camera Sensor (script) is the core camera sensor component. It is responsible for applying OpenCV distortion and encoding to the BGR8 format. The distortion model is assumed to be Plumb Bob. The script renders the image from the camera to a Texture2D and transforms it using the distortion parameters. The image is displayed in the GUI and further processed to obtain the list of bytes in BGR8 format on the script output.

The script uses two ComputeShaders, they are located in the same location as the scripts:

API type feature DoRender void Renders the Unity camera, applies OpenCV distortion to the rendered image and updates the output data."},{"location":"Components/Sensors/CameraSensor/#elements-configurable-from-the-editor-level_1","title":"Elements configurable from the editor level","text":""},{"location":"Components/Sensors/CameraSensor/#output-data","title":"Output Data","text":"

The sensor computation output format is presented below:

Category Type Description ImageDataBuffer byte[ ] Buffer with image data. CameraParameters CameraParameters Set of the camera parameters."},{"location":"Components/Sensors/CameraSensor/#cameraros2publisher-script","title":"CameraRos2Publisher (script)","text":"

Converts the data output from CameraSensor to ROS2 Image and CameraInfo messages and publishes them. The conversion and publication are performed using the Publish(CameraSensor.OutputData outputData) method, which is the callback triggered by Camera Sensor (script) for the current output.

Because the entire image is always published, the ROI field of the message is always filled with zeros. The script also ensures that binning is assumed to be zero and that the rectification matrix is the identity matrix.
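
As an illustration of these guarantees, filling a CameraInfo message that way would look roughly like this (field names follow the uppercase-first-letter rule described in the ROS2ForUnity section; this is a sketch, not the publisher's actual code):

var cameraInfoMsg = new sensor_msgs.msg.CameraInfo()\n{\n    Distortion_model = \"plumb_bob\",\n    Binning_x = 0,\n    Binning_y = 0,\n    // Roi is left zero-initialized - the entire image is always published.\n};\n// Rectification matrix R as the row-major 3x3 identity.\ncameraInfoMsg.R = new double[9];\ncameraInfoMsg.R[0] = 1.0; cameraInfoMsg.R[4] = 1.0; cameraInfoMsg.R[8] = 1.0;\n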

Warning

The script uses the camera parameters set in the CameraSensor script - remember to configure them depending on the camera you are using.

"},{"location":"Components/Sensors/CameraSensor/#elements-configurable-from-the-editor-level_2","title":"Elements configurable from the editor level","text":""},{"location":"Components/Sensors/CameraSensor/#published-topics","title":"Published Topics","text":" Category Topic Message type frame_id Camera info /sensing/camera/traffic_light/camera_info sensor_msgs/CameraInfo traffic_light_left_camera/camera_link Camera image /sensing/camera/traffic_light/image_raw sensor_msgs/Image traffic_light_left_camera/camera_link"},{"location":"Components/Sensors/GNSSSensor/","title":"GNSS Sensor","text":""},{"location":"Components/Sensors/GNSSSensor/#gnsssensor","title":"GnssSensor","text":""},{"location":"Components/Sensors/GNSSSensor/#introduction","title":"Introduction","text":"

GnssSensor is a component which simulates the position of the vehicle as computed by a Global Navigation Satellite System, based on the transformation of the GameObject to which this component is attached. The GnssSensor outputs the position in the MGRS coordinate system.

"},{"location":"Components/Sensors/GNSSSensor/#prefab","title":"Prefab","text":"

Prefab can be found under the following path:

Assets/AWSIM/Prefabs/Sensors/GnssSensor.prefab\n
"},{"location":"Components/Sensors/GNSSSensor/#link","title":"Link","text":"

GnssSensor has its own frame gnss_link in which its data is published. The sensor prefab is added to this frame. The gnss_link frame is added to the sensor_kit_base_link in the base_link object located in the URDF.

A detailed description of the URDF structure and sensors added to prefab Lexus RX450h 2015 is available in this section.

"},{"location":"Components/Sensors/GNSSSensor/#components","title":"Components","text":"

The GnssSensor functionality is split into two components:

Scripts can be found under the following path:

Assets/AWSIM/Scripts/Sensors/Gnss/*\n
"},{"location":"Components/Sensors/GNSSSensor/#gnss-sensor-script","title":"Gnss Sensor (script)","text":"

This is the main script in which all calculations are performed:

  1. the position of the Object in Unity is read,
  2. this position is transformed to the ROS2 coordinate system (the MGRS offset is added here),
  3. the result of the transformation is saved as the output of the component,
  4. a callback (which can be assigned externally) is called for the current output.
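A minimal sketch of these four steps (illustrative only - names such as ROS2Utility.UnityToRosPosition and OnOutputData are assumptions, not necessarily the actual implementation):

// 1. Read the object's position in Unity.
Vector3 unityPosition = transform.position;
// 2. Transform to the ROS2 coordinate system and add the MGRS offset.
Vector3 mgrsPosition = ROS2Utility.UnityToRosPosition(unityPosition) + mgrsOffsetPosition;
// 3. Save the result as the component output.
outputData.Position = mgrsPosition;
// 4. Invoke the externally assignable callback with the current output.
OnOutputData?.Invoke(outputData);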
"},{"location":"Components/Sensors/GNSSSensor/#elements-configurable-from-the-editor-level","title":"Elements configurable from the editor level","text":""},{"location":"Components/Sensors/GNSSSensor/#output-data","title":"Output Data","text":"Category Type Description Position Vector3 Position in the MGRS coordinate system."},{"location":"Components/Sensors/GNSSSensor/#gnss-ros2-publisher-script","title":"Gnss Ros2 Publisher (script)","text":"

Converts the data output from GnssSensor to ROS2 PoseStamped and PoseWithCovarianceStamped messages. These messages are published on a separate topic for each type. The conversion and publication are performed using the Publish(GnssSensor.OutputData outputData) method, which is the callback triggered by Gnss Sensor (script) on each output update.

Covariance matrix

The row-major representation of the 6x6 covariance matrix is filled with 0 and does not change during the script run.

"},{"location":"Components/Sensors/GNSSSensor/#elements-configurable-from-the-editor-level_1","title":"Elements configurable from the editor level","text":""},{"location":"Components/Sensors/GNSSSensor/#published-topics","title":"Published Topics","text":" Category Topic Message type frame_id Pose /sensing/gnss/pose geometry_msgs/Pose gnss_link Pose with Covariance /sensing/gnss/pose_with_covariance geometry_msgs/PoseWithCovarianceStamped gnss_link"},{"location":"Components/Sensors/IMUSensor/","title":"IMU Sensor","text":""},{"location":"Components/Sensors/IMUSensor/#imusensor","title":"IMUSensor","text":""},{"location":"Components/Sensors/IMUSensor/#introduction","title":"Introduction","text":"

IMUSensor is a component that simulates an IMU (Inertial Measurement Unit) sensor. It measures acceleration (\\({m}/{s^2}\\)) and angular velocity (\\({rad}/{s}\\)) based on the transformation of the GameObject to which this component is attached.
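To make the idea concrete, here is a finite-difference sketch of deriving both quantities from the transform (illustrative only, not the actual implementation; lastPosition, lastVelocity and lastRotation are assumed fields holding the previous frame's values):

// Velocity and acceleration from consecutive positions (world frame).
Vector3 velocity = (transform.position - lastPosition) / Time.deltaTime;
Vector3 acceleration = (velocity - lastVelocity) / Time.deltaTime; // m/s^2
// Angular velocity from the rotation delta between frames.
Quaternion delta = transform.rotation * Quaternion.Inverse(lastRotation);
delta.ToAngleAxis(out float angleDeg, out Vector3 axis);
Vector3 angularVelocity = axis * (angleDeg * Mathf.Deg2Rad) / Time.deltaTime; // rad/s
// Remember the current state for the next frame.
lastPosition = transform.position; lastVelocity = velocity; lastRotation = transform.rotation;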

"},{"location":"Components/Sensors/IMUSensor/#prefab","title":"Prefab","text":"

Prefab can be found under the following path:

Assets/AWSIM/Prefabs/Sensors/IMUSensor.prefab\n
"},{"location":"Components/Sensors/IMUSensor/#link-in-the-default-scene","title":"Link in the default Scene","text":"

IMUSensor has its own frame tamagawa/imu_link in which its data is published. The sensor prefab is added to this frame. The tamagawa/imu_link link is added to the sensor_kit_base_link in the base_link object located in the URDF.

A detailed description of the URDF structure and sensors added to prefab Lexus RX450h 2015 is available in this section.

"},{"location":"Components/Sensors/IMUSensor/#components","title":"Components","text":"

The IMUSensor functionality is split into two scripts:

Scripts can be found under the following path:

Assets/AWSIM/Scripts/Sensors/Imu/*\n
"},{"location":"Components/Sensors/IMUSensor/#imu-sensor-script","title":"IMU Sensor (script)","text":"

This is the main script in which all calculations are performed:

Warning

If the angular velocity about any axis is NaN or infinite, the angular velocity is published as a zero vector.

"},{"location":"Components/Sensors/IMUSensor/#elements-configurable-from-the-editor-level","title":"Elements configurable from the editor level","text":""},{"location":"Components/Sensors/IMUSensor/#output-data","title":"Output Data","text":"Category Type Description LinearAcceleration Vector3 Measured acceleration (m/s^2) AngularVelocity Vector3 Measured angular velocity (rad/s)"},{"location":"Components/Sensors/IMUSensor/#imu-ros2-publisher-script","title":"Imu Ros2 Publisher (script)","text":"

Converts the data output from IMUSensor to a ROS2 Imu message and publishes it. The conversion and publication are performed using the Publish(IMUSensor.OutputData outputData) method, which is the callback triggered by IMU Sensor (script) for the current output.

Warning

In each 3x3 covariance matrix, the row-major representation is filled with 0 and does not change while the script runs. In addition, the orientation field is assumed to be {1,0,0,0} and also does not change.

"},{"location":"Components/Sensors/IMUSensor/#elements-configurable-from-the-editor-level_1","title":"Elements configurable from the editor level","text":""},{"location":"Components/Sensors/IMUSensor/#published-topics","title":"Published Topics","text":" Category Topic Message type frame_id IMU data /sensing/imu/tamagawa/imu_raw sensor_msgs/Imu tamagawa/imu_link"},{"location":"Components/Sensors/LiDARSensor/AddNewLiDAR/","title":"Add New LiDAR","text":""},{"location":"Components/Sensors/LiDARSensor/AddNewLiDAR/#add-a-new-lidar","title":"Add a new LiDAR","text":"

RGLUnityPlugin (RGL) comes with model definitions and ready-to-use prefabs for a number of the most popular LiDARs. However, there is also a way to create your own custom LiDAR. This section describes how to add a new LiDAR model that works with RGL, then create a prefab for it and add it to the scene.

Supported LiDARs

Not all LiDAR types are supported by RGL. Unfortunately, MEMS LiDARs exhibit a non-repetitive scanning phenomenon - for this reason, the current implementation is not able to reproduce their operation.

"},{"location":"Components/Sensors/LiDARSensor/AddNewLiDAR/#1-add-a-new-lidar-model","title":"1. Add a new LiDAR model","text":"

The example shows the addition of a LiDAR named NewLidarModel.

To add a new LiDAR model, perform the following steps:

  1. Navigate to Assets/RGLUnityPlugin/Scripts/LidarModels.

  2. Add its name to the LidarModels.cs at the end of the enumeration. The order of enums must not be changed to keep existing prefabs working.

  3. Now, it is time to define the laser (also called a channel) distribution of the LiDAR.

    Info: If your LiDAR:

    - has a uniform laser distribution
    - has an equal range for all of the lasers
    - fires all of the rays (beams) at the same time

    you can skip this step and use our helper method to generate a simple uniform laser array definition (more information in the next step).

    1. Laser distribution is represented by a LaserArray, which consists of:

       - centerOfMeasurementLinearOffsetMm - 3D translation from the game object's origin to the LiDAR's origin.
       - focalDistanceMm - distance from the sensor center to the focal point where all laser beams intersect.
       - lasers - an array of lasers (channels), each with a number of parameters:
         - horizontalAngularOffsetDeg - horizontal angle offset of the laser (azimuth)
         - verticalAngularOffsetDeg - vertical angle offset of the laser (elevation)
         - verticalLinearOffsetMm - vertical offset of the laser (translation from origin)
         - ringId - ID of the ring (in most cases the laser ID)
         - timeOffset - time offset of the laser firing in milliseconds (relative to the first laser in the array)
         - minRange - minimum range of the laser (set if lasers have different ranges)
         - maxRange - maximum range of the laser (set if lasers have different ranges)

    2. To define a new laser distribution, create a new entry in LaserArrayLibrary.cs:

       - Add a new public static instance of LaserArray with the definition (see the sketch after this list).

       In this example, the NewLidarModel laser distribution consists of 5 lasers with:

       - elevations: 15, 10, 0, -10, -15 degrees
       - azimuths: 1.4, -1.4, 1.4, -1.4, 1.4 degrees
       - ring IDs: 1, 2, 3, 4, 5
       - time offsets: 0, 0.01, 0.02, 0.03, 0.04 milliseconds
       - an equal range that will be defined later

       Coordinate system

       Keep in mind that Unity has a left-handed coordinate system, while most of the LiDAR manuals use a right-handed coordinate system. In that case, reverse the sign of the angle values.
  4. The last step is to create a LiDAR configuration by adding an entry to LidarConfigurationLibrary.cs

    Add a new item to the ByModel dictionary that collects LiDAR model enumerations with their BaseLidarConfiguration choosing one of the implementations:

    - UniformRangeLidarConfiguration - lidar configuration for uniformly distributed rays along the horizontal axis with a uniform range for all the rays (it additionally contains minRange and maxRange parameters)
    - LaserBasedRangeLidarConfiguration - lidar configuration for uniformly distributed rays along the horizontal axis with ranges retrieved from the laser descriptions
    - or create your custom implementation in LidarConfiguration.cs, like:
      - HesaiAT128LidarConfiguration
      - HesaiQT128C2XLidarConfiguration
      - HesaiPandar128E4XLidarConfiguration

    !!! note \"Lidar configuration parameters descrition\" Please refer to this section for the detailed description of all configuration parameters.

  5. Done. The new LiDAR preset should be available via the Unity Inspector.

    The frame rate of the LiDAR can be set with the Automatic Capture Hz parameter.

    Note: in real-world LiDARs, the frame rate affects the horizontal resolution. The current implementation keeps these two parameters separate, so remember to change the resolution manually.
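Putting steps 3 and 4 together, the additions might look roughly like this (a sketch based on the field names described above; the Laser type name and the configuration field names such as laserArray are assumptions, not a verbatim copy of the library code):

// LaserArrayLibrary.cs - a 5-channel distribution (angle signs reversed for Unity's left-handed system).
public static LaserArray NewLidarModel => new LaserArray
{
    centerOfMeasurementLinearOffsetMm = new Vector3(0.0f, 47.7f, 0.0f), // illustrative offset
    focalDistanceMm = 0.0f,
    lasers = new Laser[]
    {
        new Laser { horizontalAngularOffsetDeg =  1.4f, verticalAngularOffsetDeg = -15.0f, ringId = 1, timeOffset = 0.00f },
        new Laser { horizontalAngularOffsetDeg = -1.4f, verticalAngularOffsetDeg = -10.0f, ringId = 2, timeOffset = 0.01f },
        new Laser { horizontalAngularOffsetDeg =  1.4f, verticalAngularOffsetDeg =   0.0f, ringId = 3, timeOffset = 0.02f },
        new Laser { horizontalAngularOffsetDeg = -1.4f, verticalAngularOffsetDeg =  10.0f, ringId = 4, timeOffset = 0.03f },
        new Laser { horizontalAngularOffsetDeg =  1.4f, verticalAngularOffsetDeg =  15.0f, ringId = 5, timeOffset = 0.04f },
    }
};

// LidarConfigurationLibrary.cs - register the model in the ByModel dictionary.
{ LidarModel.NewLidarModel, () => new UniformRangeLidarConfiguration
    {
        laserArray = LaserArrayLibrary.NewLidarModel, // assumed field name
        minRange = 0.1f,   // the shared range mentioned above
        maxRange = 100.0f,
    }
},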

"},{"location":"Components/Sensors/LiDARSensor/AddNewLiDAR/#2-create-new-lidar-prefab","title":"2. Create new LiDAR prefab","text":"
  1. Create an empty object and name it appropriately according to the LiDAR model.
  2. Attach the LidarSensor.cs script to the created object.
  3. Set the newly added LiDAR model in the Model Preset field and check that the configuration loads correctly. You can now customize it however you like.
  4. (Optional) Attach the PointCloudVisualization.cs script for visualization purposes.
  5. To publish the point cloud via ROS2, attach the RglLidarPublisher.cs script to the created object.
  6. Set the topics on which you want the data to be published and their frame.
  7. Save the prefab in the project.
"},{"location":"Components/Sensors/LiDARSensor/AddNewLiDAR/#3-test-your-prefab","title":"3. Test your prefab","text":"
  1. Create a new scene (remember to add the SceneManager) or use one of the existing sample scenes.
  2. Add the prepared LiDAR prefab by dragging the prefab file and dropping it into the scene.

  3. A LiDAR GameObject should be instantiated automatically.

  4. Now you can run the scene and check how your LiDAR works.

Success

We encourage you to develop a vehicle using the new LiDAR you have added - learn how to do this here.

"},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/","title":"LiDAR Sensor","text":""},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#lidarsensor","title":"LidarSensor","text":""},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#introduction","title":"Introduction","text":"

LidarSensor is the component that simulates the LiDAR (Light Detection and Ranging) sensor. LiDAR works by emitting laser beams that bounce off objects in the environment, and then measuring the time it takes for the reflected beams to return, allowing the sensor to create a 3D map of the surroundings. This data is used for object detection, localization, and mapping.

LiDAR in an autonomous vehicle can be used for many purposes. The ones mounted on the top of autonomous vehicles are primarily used to perceive the surrounding environment - for object detection, localization and mapping.

LiDARs placed on the left and right sides of the vehicle are mainly used to monitor the traffic lane and detect vehicles moving in adjacent lanes, enabling safe maneuvers such as lane changing or turning.

The LidarSensor component is a part of RGLUnityPlugin, which integrates the external RobotecGPULidar (RGL) library with Unity. RGL can also provide additional information about objects; more about it here.

Use RGL in your scene

If you want to use RGL in your scene, make sure the scene has a SceneManager component added and that all objects meet the usage requirements.

RGL default scenes

If you would like to see how LidarSensor works using RGL or run some tests, we encourage you to familiarize yourself with the RGL test scenes section.

Supported LiDARs

The current scripts implementation allows you to configure a prefab for any mechanical LiDAR. You can read about how to do it here. MEMS-based LiDARs, due to their different design, are not yet fully supported.

"},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#prefabs","title":"Prefabs","text":"

Prefabs can be found under the following path:

Assets/AWSIM/Prefabs/RobotecGPULidars/*\n

The table of available prefabs can be found below:

LiDAR Path Appearance HESAI Pandar40P HesaiPandar40P.prefab HESAI PandarQT64 HesaiPandarQT64.prefab HESAI PandarXT32 HesaiPandarXT32.prefab HESAI QT128C2X HesaiQT128C2X.prefab HESAI Pandar128E4X HesaiPandar128E4X.prefab HESAI AT128 E2X HesaiAT128E2X.prefab Ouster OS1-64 OusterOS1-64.prefab Velodyne VLP-16 VelodyneVLP16.prefab Velodyne VLP-32C VelodyneVLP32C.prefab Velodyne VLS-128-AP VelodyneVLS128.prefab"},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#link-in-the-default-scene","title":"Link in the default Scene","text":"

LidarSensor is configured in default vehicle EgoVehicle prefab. It is added to URDF object as a child of sensor_kit_base_link. LidarSensor placed in this way does not have its own frame, and the data is published relative to sensor_kit_base_link. More details about the location of the sensors in the vehicle can be found here.

A detailed description of the URDF structure and sensors added to prefab Lexus RX450h 2015 is available in this section.

Additional LiDARs

For a LiDAR placed on the left side, right side or rear, an additional link should be defined.

"},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#components-and-resources","title":"Components and Resources","text":"

The LiDAR sensor simulation functionality is split into three components:

Moreover, the scripts use Resources to provide configuration for prefabs of supported lidar models:

These are elements of the RGLUnityPlugin, you can read more here.

"},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#lidar-sensor-script","title":"Lidar Sensor (script)","text":"

This is the main component that creates the RGL node pipeline for the LiDAR simulation. The pipeline consists of:

"},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#elements-configurable-from-the-editor-level","title":"Elements configurable from the editor level","text":""},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#output-data","title":"Output Data","text":"

LidarSensor provides public methods to extend this pipeline with additional RGL nodes. In this way, other components can request point cloud processing operations and receive data in the desired format.

Example of how to get XYZ point cloud data:

  1. To obtain point cloud data in another component, you have to create a new RGLNodeSequence with an RGL node that yields the XYZ field and connect it to the LidarSensor:
    rglOutSubgraph = new RGLNodeSequence().AddNodePointsYield(\"OUT_XYZ\", RGLField.XYZ_F32);\nlidarSensor = GetComponent<LidarSensor>();\nlidarSensor.ConnectToWorldFrame(rglOutSubgraph); // you can also connect to Lidar frame using ConnectToLidarFrame\n// You can add a callback to receive a notification when new data is ready\nlidarSensor.onNewData += HandleLidarDataMethod;\n
  2. To get data from RGLNodeSequence call GetResultData:
    Vector3[] xyz = new Vector3[0];\nrglOutSubgraph.GetResultData<Vector3>(ref xyz);\n
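For completeness, a hypothetical callback consuming the data when notified (the method name follows the onNewData snippet in step 1; rglOutSubgraph is the sequence created there):

void HandleLidarDataMethod()
{
    // Pull the latest XYZ points out of the connected subgraph.
    Vector3[] xyz = new Vector3[0];
    rglOutSubgraph.GetResultData<Vector3>(ref xyz);
    Debug.Log(\"Received points: \" + xyz.Length);
}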
"},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#rgl-lidar-publisher-script","title":"Rgl Lidar Publisher (script)","text":"

RglLidarPublisher extends the main RGL pipeline created in LidarSensor with RGL nodes that produce point clouds in specific formats and publish them to ROS2 topics. Thanks to the ROS2 integration in RGL, point clouds can be published directly from the native library. RGL creates a ROS2 node named /RobotecGPULidar with publishers generated by RGL nodes.

Currently, RglLidarPublisher implements two ROS2 publishers:

Details on the construction of these formats are available in the PointCloudFormats under the following path:

Assets/AWSIM/Scripts/Sensors/LiDAR/PointCloudFormats.cs\n

rosPCL48 format

For a better understanding of the rosPCL48 format, we encourage you to familiarize yourself with point cloud pre-processing in Autoware, which is described here.

"},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#elements-configurable-from-the-editor-level_1","title":"Elements configurable from the editor level","text":""},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#published-topics","title":"Published Topics","text":" Category Topic Message type frame_id PointCloud 24-byte format /lidar/pointcloud sensor_msgs/PointCloud2 world PointCloud 48-byte format /lidar/pointcloud_ex sensor_msgs/PointCloud2 world"},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#point-cloud-visualization-script","title":"Point Cloud Visualization (script)","text":"

A component visualizing a point cloud obtained from RGL in the form of a Vector3 list as colored points in the Unity scene. Based on the defined color table, it colors the points depending on the height at which they are located.

The obtained points are displayed as the vertices of a mesh, and they are colored using the PointCloudMaterial material, which can be found at the following path:

Assets/RGLUnityPlugin/Resources/PointCloudMaterial.mat\n

Point Cloud Visualization preview:

"},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#elements-configurable-from-the-editor-level_2","title":"Elements configurable from the editor level","text":""},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#read-material-information","title":"Read material information","text":"

To ensure the publication of the information described in this section, GameObjects must be adjusted accordingly. This tutorial describes how to do it.

"},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#intensity-texture","title":"Intensity Texture","text":"

RGL Unity Plugin allows assigning an Intensity Texture to GameObjects to produce a point cloud containing information about the intensity of LiDAR ray hits. It can be used to distinguish different levels of an object's reflectivity.

"},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#output-data_1","title":"Output data","text":"

Point cloud containing intensity is published on the ROS2 topic via RglLidarPublisher component. The intensity value is stored in the intensity field of the sensor_msgs/PointCloud2 message.

"},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#instance-segmentation","title":"Instance segmentation","text":"

RGL Unity Plugin allows assigning an ID to GameObjects to produce a point cloud containing information about hit objects. It can be used for instance/semantic segmentation tasks. This tutorial describes how to do it.

LidarInstanceSegmentationDemo

If you would like to see how LidarInstanceSegmentationDemo works using RGL or run some tests, we encourage you to familiarize yourself with this section.

"},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#output-data_2","title":"Output data","text":"

A point cloud containing the IDs of hit objects is published on the ROS2 topic via the RglLidarPublisher component. It is disabled by default. Properties related to this feature are marked below:

"},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#dictionary-mapping","title":"Dictionary mapping","text":"

The resulting simulation data contains only the IDs of objects, without their human-readable names. To facilitate the interpretation of such data, a function has been implemented that saves a file with a dictionary mapping instance IDs to GameObject names. It writes pairs of values in the yaml format:
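For example, a saved mapping might contain entries like the following (illustrative values):

12: SmallCar
13: Taxi
14: Vehicles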

To enable saving the dictionary mapping, set the output file path in the Semantic Category Dictionary File property of the Scene Manager component:

The dictionary mapping file will be saved at the end of the simulation.

"},{"location":"Components/Sensors/LiDARSensor/RGLUnityPlugin/","title":"RGLUnityPlugin","text":""},{"location":"Components/Sensors/LiDARSensor/RGLUnityPlugin/#rglunityplugin","title":"RGLUnityPlugin","text":"

Robotec GPU Lidar (RGL) is an open-source, high-performance LiDAR simulator running on CUDA-enabled GPUs. It is a cross-platform solution compatible with both Windows and Linux. RGL utilizes RTX cores for acceleration whenever they are available.

RGL is used in AWSIM for performance reasons. It makes it possible to perform a large number of calculations on the GPU, which is extremely helpful given the size of the scenes. AWSIM is integrated with RGL out of the box, using the RGLUnityPlugin asset.

Warning

If you want to use RGL in your scene, make sure the scene has an RGLSceneManager component added and all objects meet the usage requirements.

"},{"location":"Components/Sensors/LiDARSensor/RGLUnityPlugin/#concept","title":"Concept","text":"

Describing the concept of using RGL in AWSIM, we distinguish:

Producing a point cloud is based on the use of a Scene containing Entities with Meshes, and on placing an Ego Entity with a LiDAR sensor that creates a Graph describing the ray pattern and performs raytracing. In subsequent simulation frames, SceneManager synchronizes the scene between Unity and RGL, and the LiDAR sensor updates the rays' pose on the scene and triggers the Graph to perform raytracing and format the desired output.

"},{"location":"Components/Sensors/LiDARSensor/RGLUnityPlugin/#package-structure","title":"Package structure","text":"

RGLUnityPlugin asset contains:

"},{"location":"Components/Sensors/LiDARSensor/RGLUnityPlugin/#scripts","title":"Scripts","text":""},{"location":"Components/Sensors/LiDARSensor/RGLUnityPlugin/#scenemanager","title":"SceneManager","text":"

Each scene needs a SceneManager component to synchronize models between Unity and RGL. On every frame, it detects changes in the Unity scene and propagates them to the native RGL code. When necessary, it obtains 3D models from GameObjects on the scene, and it removes them when they are no longer needed.

Three different strategies to interact with in-simulation 3D models are implemented. SceneManager uses one of the following policies to construct the scene in RGL:

Mesh Source Strategy | Static Entity | Animated Entity (NPC)
Only Colliders | Collider | Collider
Regular Meshes And Colliders Instead Of Skinned | Regular Mesh | Collider
Regular Meshes And Skinned Meshes | Regular Mesh | Regular Mesh

Mesh source can be changed in the SceneManager script properties:

Performance

SceneManager performance depends on the selected mesh source option.

"},{"location":"Components/Sensors/LiDARSensor/RGLUnityPlugin/#usage-requirements","title":"Usage requirements","text":"

Objects, to be detectable by RGL, must fulfill the following requirements:

  1. Contain one of the following components: Collider, Mesh Renderer, or Skinned Mesh Renderer - which one depends on the SceneManager mesh source parameter.
  2. Be readable from CPU-accessible memory - this can be achieved using the Read/Write Enabled checkbox in the mesh settings.

    !!! note \"Readable objects\" Primitive Objects are readable by default.

    !!! example The activated Readable option in the mesh should look like this.

      <img src=\"readable.png\" width=\"75%\">\n
"},{"location":"Components/Sensors/LiDARSensor/ReadMaterialInformation/","title":"Read Material Information","text":"

RGL Unity Plugin allows to:

"},{"location":"Components/Sensors/LiDARSensor/ReadMaterialInformation/#add-intensity-texture-assignment","title":"Add Intensity Texture assignment","text":"

To enable reading material information, add the IntensityTexture component to every GameObject that is expected to have non-default intensity values.

After that, the desired texture has to be inserted into the Intensity Texture slot.

The texture has to be in R8 format - that is, 8 bits in the red channel (256 possible values).

When the texture is assigned, the intensity values will be read from the texture and added to the point cloud if and only if the mesh component in the GameObject has a set of properly created texture coordinates.

The expected number of texture coordinates is equal to the number of vertices in the mesh. The number of indices is irrelevant. Otherwise, the texture will not be read properly.

"},{"location":"Components/Sensors/LiDARSensor/ReadMaterialInformation/#add-id-assignment","title":"Add ID assignment","text":"

To enable segmentation, add the SemanticCategory component to every GameObject that is expected to have a distinct ID. All meshes that belong to a given object will inherit its ID. The ID inheritance mechanism allows IDs to be overwritten for individual meshes/objects. This solution also enables the creation of coarse categories (e.g., Pedestrians, Vehicles).
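As a rough illustration of the assignment (a sketch; only the SemanticCategory component name comes from this page - the categoryId field name is an assumption):

// Attach the component and give the object a distinct ID; child meshes inherit it.
var category = vehicleGameObject.AddComponent<SemanticCategory>();
category.categoryId = 42; // can be overridden on individual child objects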

Example

SemanticCategory component is assigned to the Taxi GameObject. All meshes in the Taxi GameObject will have the same instance ID as the Taxi:

Example

The driver has its own SemanticCategory component, so its instance ID will differ from that of the rest of the meshes:

Example

SemanticCategory component is assigned to the Vehicles GameObject that contains all of the cars on the scene:

"},{"location":"Components/Sensors/LiDARSensor/ReadMaterialInformation/#dictionary-mapping","title":"Dictionary mapping","text":"

The resulting simulation data contains only the IDs of objects, without their human-readable names. To facilitate the interpretation of such data, a function has been implemented that saves a file with a dictionary mapping instance IDs to GameObject names. It writes pairs of values in the yaml format:

To enable saving the dictionary mapping, set the output file path in the Semantic Category Dictionary File property of the Scene Manager component:

The dictionary mapping file will be saved at the end of the simulation.

"},{"location":"Components/Sensors/VehicleStatusSensor/","title":"Vehicle Status Sensor","text":""},{"location":"Components/Sensors/VehicleStatusSensor/#vehiclestatussensor","title":"VehicleStatusSensor","text":""},{"location":"Components/Sensors/VehicleStatusSensor/#introduction","title":"Introduction","text":"

VehicleStatusSensor is a component designed to aggregate information about the current state of the vehicle. It aggregates information about:

"},{"location":"Components/Sensors/VehicleStatusSensor/#prefab","title":"Prefab","text":"

Prefab can be found under the following path:

Assets/AWSIM/Prefabs/Sensors/VehicleStatusSensor.prefab\n
"},{"location":"Components/Sensors/VehicleStatusSensor/#link-in-the-default-scene","title":"Link in the default Scene","text":"

This sensor is added directly to the URDF link in the EgoVehicle prefab.

A detailed description of the URDF structure and sensors added to prefab Lexus RX450h 2015 is available in this section.

"},{"location":"Components/Sensors/VehicleStatusSensor/#components","title":"Components","text":"

All features are implemented within the Vehicle Report Ros2 Publisher (script) which can be found under the following path:

Assets/AWSIM/Scripts/Sensors/*\n
"},{"location":"Components/Sensors/VehicleStatusSensor/#vehicle-report-ros2-publisher-script","title":"Vehicle Report Ros2 Publisher (script)","text":"

The script is responsible for updating and publishing each of the aggregated data on a separate topic. Therefore, it has 6 publishers publishing the appropriate type of message with a constant frequency - one common for all data.

"},{"location":"Components/Sensors/VehicleStatusSensor/#elements-configurable-from-the-editor-level","title":"Elements configurable from the editor level","text":"

Vehicle configuration

An important element of the script configuration that must be set is the scene Object (Vehicle). It will be used for reading all the data needed. The appropriate EgoVehicle object should be selected.

If you can't select the right object, make sure it is set up correctly - that it has all the scripts needed for an EgoVehicle added.

"},{"location":"Components/Sensors/VehicleStatusSensor/#published-topics","title":"Published topics","text":" Category Topic Message type frame_id Control mode /vehicle/status/control_mode autoware_auto_vehicle_msgs/ControlModeReport - Gear status /vehicle/status/gear_status autoware_auto_vehicle_msgs/GearReport - Steering status /vehicle/status/steering_status autoware_auto_vehicle_msgs/SteeringReport - Turn indicators status /vehicle/status/turn_indicators_status autoware_auto_vehicle_msgs/TurnIndicatorsReport - Hazard lights status /vehicle/status/hazard_lights_status autoware_auto_vehicle_msgs/HazardLightsReport - Velocity status /vehicle/status/velocity_status autoware_auto_vehicle_msgs/VelocityReport base_line"},{"location":"Components/Traffic/NPCs/Pedestrian/","title":"Pedestrian","text":""},{"location":"Components/Traffic/NPCs/Pedestrian/#introduction","title":"Introduction","text":"

NPCPedestrian is an object that simulates a human standing or moving on the scene. It can move cyclically in any chosen place thanks to the available scripts. Traffic light tracking will be implemented in the future.

Sample scene

If you would like to see how NPCPedestrian works or run some tests, we encourage you to familiarize yourself with the NPCPedestrianSample default scene described in this section.

"},{"location":"Components/Traffic/NPCs/Pedestrian/#prefab-and-fbx","title":"Prefab and Fbx","text":"

Prefab can be found under the following path:

Assets/AWSIM/Prefabs/NPCs/Pedestrians/humanElegant.prefab\n
"},{"location":"Components/Traffic/NPCs/Pedestrian/#visual-elements","title":"Visual elements","text":"

The prefab is developed using a model available in the form of an *.fbx file. From this file, the visual elements of the model, the Animator and the LOD were loaded. The Animator and LOD are added as components of the main parent GameObject in the prefab, while the visual elements of the model are added as its children.

*.fbx file can be found under the following path:

Assets/AWSIM/Models/NPCs/Pedestrians/Human/humanElegant.fbx\n

NPCPedestrian prefab has the following content:

The ReferencePoint is used by the NPC Pedestrian (script) described here.

"},{"location":"Components/Traffic/NPCs/Pedestrian/#link-in-the-default-scene","title":"Link in the default Scene","text":"

Pedestrians implemented in the scene are usually added in one aggregating object - in this case it is NPCPedestrians. This object is added to the Environment prefab.

"},{"location":"Components/Traffic/NPCs/Pedestrian/#components","title":"Components","text":"

There are several components responsible for the full functionality of NPCPedestrian:

Scripts can be found under the following path:

Assets/AWSIM/Scripts/NPCs/Pedestrians/*\n
"},{"location":"Components/Traffic/NPCs/Pedestrian/#rigidbody","title":"Rigidbody","text":"

Rigidbody ensures that the object is controlled by the physics engine. In order to connect the animation to the object, the Is Kinematic option must be enabled. With Is Kinematic set, an NPCPedestrian object has no physical interaction with other objects - it will not react to a vehicle that hits it. Use Gravity should be turned off - the correct position of the pedestrian relative to the ground is ensured by the NPC Pedestrian (script). In addition, Interpolate should be turned on to smooth out the effects of the physics engine.
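The same setup expressed in code (a sketch using the standard Unity Rigidbody API):

Rigidbody rb = GetComponent<Rigidbody>();
rb.isKinematic = true;  // animation-driven; no physical reaction to collisions
rb.useGravity = false;  // ground following is handled by the NPC Pedestrian (script)
rb.interpolation = RigidbodyInterpolation.Interpolate; // smooth out physics stepping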

"},{"location":"Components/Traffic/NPCs/Pedestrian/#lod-level-of-detail","title":"LOD (Level of Detail)","text":"

LOD (Level of Detail) adjusts the level of detail of the object depending on the ratio of the GameObject's screen-space height to the total screen height. The pedestrian model has two object groups, suffixed LOD0 and LOD1. LOD0 objects are much more detailed than LOD1 - they have many more vertices in their Meshes. Displaying complex meshes requires more performance, so when the GameObject occupies only a small part of the screen, the less complex LOD1 objects are used.

In the case of the NPCPedestrian prefab, if its object is less than 25% of the height of the screen then objects with the LOD1 suffix are used. For values less than 1% the object is culled.

"},{"location":"Components/Traffic/NPCs/Pedestrian/#animator","title":"Animator","text":"

Animator component provides animation assignments to a GameObject in the scene. It uses a developed Controller which defines which animation clips to use and controls when and how to blend and transition between them.

The AnimationController for humans should have two float parameters for proper transitions. Transitions between animation clips are made depending on the values of these parameters:

The developed controller can be found at the following path: Assets/AWSIM/Models/NPCs/Pedestrians/Human/Human.controller

Walking to running transition

The example shows the walking state transitioning to running as a result of the condition \(\mathrm{moveSpeed} > 1.6\) being satisfied.
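In code, driving such a transition comes down to updating the controller parameter each frame (a sketch; only the moveSpeed parameter name comes from the example above, currentSpeed is illustrative):

// Walking blends into running once moveSpeed exceeds 1.6.
animator.SetFloat(\"moveSpeed\", currentSpeed);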

"},{"location":"Components/Traffic/NPCs/Pedestrian/#npc-pedestrian-script","title":"NPC Pedestrian (script)","text":"

The script takes the Rigidbody and Animator components and combines them in such a way that the actual animation depends on the movement of the Rigidbody. It provides inputs that allow the pedestrian to move - to change its position and orientation. In addition, the ReferencePoint is used to ensure that the pedestrian follows the ground plane correctly.

"},{"location":"Components/Traffic/NPCs/Pedestrian/#elements-configurable-from-the-editor-level","title":"Elements configurable from the editor level","text":""},{"location":"Components/Traffic/NPCs/Pedestrian/#input-data","title":"Input Data","text":"Category Type Description SetPosition Vector3 Move the NPCPedestrian so that the reference point is at the specified coordinates. SetRotation Vector3 Rotate the NPCPedestrian so that the orientation of the reference point becomes the specified one."},{"location":"Components/Traffic/NPCs/Pedestrian/#simple-pedestrian-walker-controller-script","title":"Simple Pedestrian Walker Controller (script)","text":"

Simple Pedestrian Walker Controller is a script that allows the pedestrian to cyclically move back and forth along a straight line. One-way motion is performed over a fixed time given by the Duration parameter and at a constant linear velocity given by the Speed parameter. The script uses the NPCPedestrian controls provided by the NPC Pedestrian (script) inputs.
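A minimal sketch of such a controller built on those inputs (class and field names are illustrative; only SetPosition/SetRotation come from the Input Data table above):

public class SimplePedestrianWalkerSketch : MonoBehaviour
{
    public float duration = 4f; // seconds of one-way motion
    public float speed = 1f;    // constant linear velocity in m/s
    NPCPedestrian npc;
    Vector3 position;
    Quaternion heading;
    float elapsed;

    void Start()
    {
        npc = GetComponent<NPCPedestrian>();
        position = transform.position;
        heading = transform.rotation;
    }

    void FixedUpdate()
    {
        elapsed += Time.fixedDeltaTime;
        if (elapsed > duration)
        {
            elapsed = 0f;
            heading *= Quaternion.Euler(0f, 180f, 0f); // turn around and walk back
        }
        position += heading * Vector3.forward * speed * Time.fixedDeltaTime;
        npc.SetPosition(position);
        npc.SetRotation(heading.eulerAngles);
    }
}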

Pedestrian walking on the sidewalk

"},{"location":"Components/Traffic/NPCs/Pedestrian/#collider","title":"Collider","text":"

Collider is an optional pedestrian component. By default, NPCPedestrian doesn't have this component added. It can be added if you want to detect collisions, e.g. with an EgoVehicle. There are several types of colliders; choose the right one and configure it for your own requirements.

Capsule Collider

An example of a CapsuleCollider that covers almost the entire pedestrian.

"},{"location":"Components/Traffic/NPCs/Vehicle/","title":"Vehicle","text":""},{"location":"Components/Traffic/NPCs/Vehicle/#npcvehicle","title":"NPCVehicle","text":""},{"location":"Components/Traffic/NPCs/Vehicle/#introduction","title":"Introduction","text":"

NPCVehicle is a non-playable object that simulates a vehicle that is stationary or moving around the scene. It can move on roads, more specifically on TrafficLanes, thanks to the use of the TrafficSimulator - which you can read more about here. Vehicles moving on the scene take each other into account - they avoid collisions, obey traffic lights and have a mechanism for yielding the right of way.

Sample scene

If you would like to see how NPCVehicle works or run some tests, we encourage you to familiarize yourself with the NPCVehicleSample default scene described in this section.

Ego Vehicle

If you are interested in the most important vehicle on the scene - Ego Vehicle, we encourage you to read this section.

"},{"location":"Components/Traffic/NPCs/Vehicle/#prefabs-and-fbxs","title":"Prefabs and Fbxs","text":"

Prefabs can be found under the following path:

Assets/AWSIM/Prefabs/NPCs/Vehicles/*\n

The table shows the available prefabs of the vehicles:

Hatchback SmallCar Taxi Truck Van Appearance Prefab Hatchback.prefab SmallCar.prefab Taxi-64.prefab Truck_2t.prefab Van.prefab

NPCVehicle prefab has the following content:

As you can see, it consists of two parent GameObjects - Visuals, aggregating visual elements, and Colliders, aggregating colliders - and the single object CoM. All objects are described in the sections below.

"},{"location":"Components/Traffic/NPCs/Vehicle/#visual-elements","title":"Visual elements","text":"

Prefabs are developed using models available in the form of *.fbx files. For each vehicle, the visual elements and LOD were loaded from the appropriate *.fbx file. The LOD is always added as a component of the main parent GameObject in the prefab, while the visual elements of the model are aggregated and added in the Visuals object.

*.fbx file for each vehicle is located in the appropriate Models directory for the vehicle under the following path:

Assets/AWSIM/Models/NPCs/Vehicles/<vehicle_name>/Models/<vehicle_name>.fbx\n

As you can see, the additional visual element is Driver.

It was also loaded from the *.fbx file which can be found under the following path:

Assets/AWSIM/Models/NPCs/Vehicles/Driver/Model/Driver.fbx\n

Vehicle fbx

The content of a sample *.fbx file is presented below; all elements except Collider have been added to the prefab as visual elements of the vehicle. Collider is used as the Mesh source for the Mesh Collider in the BodyCollider object.

"},{"location":"Components/Traffic/NPCs/Vehicle/#link","title":"Link","text":"

The default scene does not have vehicles implemented in fixed places, but they are spawned by RandomTrafficSimulator which is located in the Environment prefab. Therefore, before starting the simulation, no NPCVehicle object is on the scene.

When you run the simulation, you can see objects appearing as children of RandomTrafficSimulator:

In each NPCVehicle prefab, the local coordinate system of the vehicle (the main prefab link) should be defined on the rear wheel axis projected onto the ground - midway between the wheels. This is significant when characterizing the dynamics of the object, as it makes describing its motion and control more convenient.

"},{"location":"Components/Traffic/NPCs/Vehicle/#components","title":"Components","text":"

There are several components responsible for the full functionality of NPCVehicle:

Scripts can be found under the following path:

Assets/AWSIM/Scripts/NPCs/Vehicles\n
"},{"location":"Components/Traffic/NPCs/Vehicle/#com","title":"CoM","text":"

CoM (Center of Mass) is an additional link that is defined to set the center of mass in the Rigidbody. The NPC Vehicle (script) is responsible for its assignment. It should be defined in accordance with reality. Most often, the center of mass of the vehicle is located at its center, at the height of the wheel axles - as shown below.
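Illustratively, applying the CoM link to the Rigidbody comes down to one assignment (a sketch; comTransform is an assumed reference to the CoM link, and Rigidbody.centerOfMass is expressed in the body's local space):

// Convert the CoM link's world position into the Rigidbody's local space.
Rigidbody rb = GetComponent<Rigidbody>();
rb.centerOfMass = rb.transform.InverseTransformPoint(comTransform.position);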

"},{"location":"Components/Traffic/NPCs/Vehicle/#colliders","title":"Colliders","text":"

Colliders are used to ensure collisions between objects. In NPCVehicle, a main BodyCollider and Wheel Colliders for each wheel are added.

"},{"location":"Components/Traffic/NPCs/Vehicle/#body-collider","title":"Body Collider","text":"

BodyCollider is the vehicle Object responsible for ensuring collisions with other objects. Additionally, it can be used to detect these collisions. The MeshCollider uses a Mesh of an Object to build its Collider. The Mesh for the BodyCollider was also loaded from the *.fbx file, similarly to the visual elements.

"},{"location":"Components/Traffic/NPCs/Vehicle/#wheels-colliders","title":"Wheels Colliders","text":"

WheelsColliders are an essential element from the point of view of driving vehicles on the road. They are the only ones that have contact with the roads and it is important that they are properly configured. Each vehicle, apart from the visual elements related to the wheels, should also have 4 colliders - one for each wheel.

To avoid manual Inspector configuration of each WheelCollider, the WheelColliderConfig has been developed. It ensures that friction is set to 0 and that only wheel suspension and collisions are enabled.

Wheel Collider Config

For a better understanding of the meaning of WheelCollider we encourage you to read this manual.

"},{"location":"Components/Traffic/NPCs/Vehicle/#lod","title":"LOD","text":"

LOD adjusts the level of detail of the object depending on the ratio of the GameObject's screen-space height to the total screen height. Vehicle models have only one LOD0 group, so there is no reduction in model complexity when the vehicle does not occupy a large part of the screen. It is only culled when it occupies less than 2% of the screen height.

"},{"location":"Components/Traffic/NPCs/Vehicle/#rigidbody","title":"Rigidbody","text":"

Rigidbody ensures that the object is controlled by the physics engine. The Mass of the vehicle should approximate its actual weight. In order for the vehicle to physically interact with other objects - to react to collisions - Is Kinematic must be turned off. Use Gravity should be turned on to ensure the correct behavior of the body during movement. In addition, Interpolate should be turned on to smooth out the effects of the physics engine.

"},{"location":"Components/Traffic/NPCs/Vehicle/#npc-vehicle-script","title":"NPC Vehicle (script)","text":"

The script takes the Rigidbody and provides inputs that allow the NPCVehicle to move. The script inputs give the ability to set the position and orientation of the vehicle, taking into account the effects of suspension and gravity. In addition, the script uses the CoM link reference to assign the center of mass of the vehicle to the Rigidbody.

Script inputs are used by RandomTrafficSimulator, which controls the vehicles on the scene - it is described here.

"},{"location":"Components/Traffic/NPCs/Vehicle/#input-data","title":"Input Data","text":"Category Type Description SetPosition Vector3 Move the NPCVehicle so that its x, z coordinates are same as the specified coordinates. Pitch and roll are determined by physical operations that take effects of suspension and gravity into account. SetRotation Vector3 Rotate the NPCVehicle so that its yaw becomes equal to the specified one. Vertical movement is determined by physical operations that take effects of suspension and gravity into account.

Visual Object Root is a reference to the parent aggregating visuals; it can be used to disable the appearance of the visual elements of the NPCVehicle in the scene.

Bounds represents an axis-aligned bounding box of the NPCVehicle. It is used primarily to detect collisions between vehicles during spawning, yielding and other events. Moreover, the vehicle bounds are displayed by Gizmos.

The settings of the remaining elements, i.e. the Axle and the Lights, are described here and here.

No Gizmo visualization

If you don't see the Gizmos' visual elements, remember to turn them on.

"},{"location":"Components/Traffic/NPCs/Vehicle/#axle-settings","title":"Axle Settings","text":"

This part of the settings is responsible for the proper connection of visual elements with the collider for each wheel - described earlier. The objects configured in this section are used to control the vehicle - its wheel speed and steering angle, which are calculated based on the input values. Correct configuration is very important from the point of view of the NPCVehicle movement on the road.

"},{"location":"Components/Traffic/NPCs/Vehicle/#lights-settings","title":"Lights Settings","text":"

This part of the settings is related to the configuration of materials emission - used when a specific lighting is activated. There are 3 types of lights: Brake, Left Turn Signal and Right Turn Signal. Each of the lights has its visual equivalent in the form of a Mesh. In the case of NPCVehicle all of the lights are included in the Body object Mesh, which has many materials - including those related to lights.

For each type of light, the appropriate Material Index (the index of the element in the mesh) and Lighting Color are assigned - yellow for the Turn Signals, red for the Brake.

Lighting Intensity values are also configured - the greater the value, the more light will be emitted. This value works together with the Lighting Exposure Weight parameter, an exposure weight - the lower its value, the more light is emitted.

The brake light is switched on depending on the speed of the NPCVehicle, while RandomTrafficSimulator is responsible for switching the turn signals on and off.

"},{"location":"Components/Traffic/RandomTraffic/AddRandomTrafficEnvironment/","title":"Add Random Traffic Environment","text":""},{"location":"Components/Traffic/RandomTraffic/AddRandomTrafficEnvironment/#add-environment-for-random-traffic","title":"Add Environment for Random Traffic","text":"

This document describes the steps to properly configure RandomTrafficSimulator in your environment.

"},{"location":"Components/Traffic/RandomTraffic/AddRandomTrafficEnvironment/#map-preparation","title":"Map preparation","text":"

The 3D map model should be added to the scene. Please make sure that the Environment component with appropriate mgrsOffsetPosition is attached to the root GameObject.

"},{"location":"Components/Traffic/RandomTraffic/AddRandomTrafficEnvironment/#annotate-traffic-lights","title":"Annotate Traffic Lights","text":"

Please attach the TrafficLight component to all traffic light GameObjects placed on the scene.

"},{"location":"Components/Traffic/RandomTraffic/AddRandomTrafficEnvironment/#load-lanelet","title":"Load Lanelet","text":"

The lanelet load process can be performed by opening AWSIM -> Random Traffic -> Load Lanelet in the top toolbar of the Unity Editor.

You should be prompted with a window similar to the one presented below. Please adjust the parameters for the loading process if needed.

Waypoint settings affect the density and accuracy of the generated waypoints. The parameters are described below:

To generate the Lanelet2 map representation in your simulation, please click the Load button. Environment components should be generated and placed as child objects of the Environment GameObject. You can check their visual representation by clicking consecutive elements in the scene hierarchy.

"},{"location":"Components/Traffic/RandomTraffic/AddRandomTrafficEnvironment/#annotate-traffic-intersections","title":"Annotate Traffic Intersections","text":"

To annotate intersections, please add an empty GameObject named TrafficIntersections at the same level as the TrafficLanes GameObject.

For each intersection repeat the following steps:

  1. Add a GameObject named TrafficIntersection as a child object of the TrafficIntersections object.
  2. Attach a TrafficIntersection component to it.
  3. Add a BoxCollider as a component of the GameObject. Its size and position should cover the whole intersection. This is used for detecting vehicles in the intersection.
  4. Set the TrafficLightGroups. Each group is controlled to have different signals, so facing traffic lights should be added to the same group. These groupings are used in traffic signal control.
  5. Specify the signal control pattern.
"},{"location":"Components/Traffic/RandomTraffic/AddRandomTrafficEnvironment/#annotate-right-of-ways-on-uncontrolled-intersections","title":"Annotate right of ways on uncontrolled intersections","text":"

For the vehicles to operate properly, the right of way of TrafficLanes must be annotated manually at intersections without traffic lights.

To set the right of way, please:

"},{"location":"Components/Traffic/RandomTraffic/AddRandomTrafficEnvironment/#annotate-stop-lines","title":"Annotate stop lines","text":"

For each right-turn lane that yields to the opposite straight or left-turn lane, a stop line needs to be defined near the center of the intersection. If there is no visible stop line, a StopLine component should be added to the scene near the center of the intersection and associated with the TrafficLane.

"},{"location":"Components/Traffic/RandomTraffic/AddRandomTrafficEnvironment/#assign-intersection-trafficlanes","title":"Assign Intersection TrafficLanes","text":"

To make the yielding rules work properly, it is necessary to categorize the TrafficLanes. The ones that belong to an intersection have the IntersectionLane variable set to true.

To automate the assignment of the corresponding IntersectionLane to each TrafficLane, the script AssignIntersectionTrafficLanes can be used.

  1. At the time of assignment, add it as a component to some object in the scene (e.g. to the Environment object).
  2. Disable the component (uncheck the checkbox next to the script name).
  3. Assign the GameObject which contains all TrafficLanes objects to TrafficLanesObjectsParent.
  4. Check all 4 options.
  5. Enable the component (check the checkbox next to the script name).

Check the log to see if all operations were completed:

As a result, the names of the TrafficLane objects should be prefixed with sequential numbers, and TrafficLanes at intersections should be marked. TrafficLanes with IntersectionLane set to True are displayed by Gizmos in green; if IntersectionLane is False, their color is white.

"},{"location":"Components/Traffic/RandomTraffic/AddRandomTrafficEnvironment/#check-final-configuration","title":"Check final configuration","text":"

Once all the components are ready, the simulation can be run. Check carefully if the vehicles are moving around the map correctly. For each intersection, review the settings of the relevant components if vehicles are unable to proceed.

"},{"location":"Components/Traffic/RandomTraffic/RandomTrafficSimulator/","title":"Random Traffic Simulator","text":""},{"location":"Components/Traffic/RandomTraffic/RandomTrafficSimulator/#random-traffic-simulator","title":"Random Traffic Simulator","text":"

The RandomTrafficSimulator simulates city traffic that respects all traffic rules. The system allows random selection of the car models and the paths they follow. It also allows adding static vehicles to the simulation.

"},{"location":"Components/Traffic/RandomTraffic/RandomTrafficSimulator/#getting-started","title":"Getting Started","text":""},{"location":"Components/Traffic/RandomTraffic/RandomTrafficSimulator/#overview","title":"Overview","text":"

The random traffic system consists of the following components:

"},{"location":"Components/Traffic/RandomTraffic/RandomTrafficSimulator/#components-settings","title":"Components Settings","text":"

The following section describes Unity Editor components settings.

"},{"location":"Components/Traffic/RandomTraffic/RandomTrafficSimulator/#random-traffic-simulator_1","title":"Random Traffic Simulator","text":"Parameter Description General Settings Seed Seed value for random generator Ego Vehicle Transform of ego vehicle Vehicle Layer Mask LayerMask that masks only vehicle(NPC and ego) colliders Ground Layer Mask LayerMask that masks only ground colliders of the map NPC Vehicle Settings Max Vehicle Count Maximum number of NPC vehicles to be spawned in simulation NPC Prefabs Prefabs representing controlled vehicles. They must have NPCVehicle component attached. Spawnable Lanes TrafficLane components where NPC vehicles can be spawned during traffic simulation Vehicle Config Parameters for NPC vehicle controlSudden Deceleration is a deceleration related to emergency braking Debug Show Gizmos Enable the checkbox to show editor gizmos that visualize behaviours of NPCs"},{"location":"Components/Traffic/RandomTraffic/RandomTrafficSimulator/#gizmos","title":"Gizmos","text":"

Gizmos are useful for checking the current behavior of NPCs and its causes. Gizmos have a high computational load, so please disable them if the simulation is laggy.

"},{"location":"Components/Traffic/RandomTraffic/YieldingRules/","title":"Yielding Rules","text":""},{"location":"Components/Traffic/RandomTraffic/YieldingRules/#yielding-rules","title":"Yielding rules","text":"

The RandomTrafficSimulator assumes that there are 10 phases of yielding priority:

RandomTrafficYielding scene

If you would like to see how RandomTrafficSimulator with yielding rules works or run some tests, we encourage you to familiarize yourself with the RandomTrafficYielding scene described in this section.

  1. NONE - a state in which it is only checked whether a vehicle is approaching the intersection. If so, a transition to the ENTERING_INTERSECTION state is made.

  2. ENTERING_INTERSECTION - a state in which it is checked whether any of the situations LANES_RULES_ENTERING_INTERSECTION, LEFT_HAND_RULE_ENTERING_INTERSECTION or INTERSECTION_BLOCKED occurs; if so, the state of the vehicle is changed to the one matching the situation - to determine whether the vehicle must yield priority. If none of these situations occur, entering the intersection results in a transition to AT_INTERSECTION.

  3. AT_INTERSECTION - a state in which it is checked whether any of the situations LANES_RULES_AT_INTERSECTION, LEFT_HAND_RULE_AT_INTERSECTION or FORCING_PRIORITY occurs; if so, the state of the vehicle is changed to the one matching the situation - to determine whether the vehicle must yield priority. If none of these situations occur, leaving the intersection results in a transition to NONE.

  4. INTERSECTION_BLOCKED - when vehicle A is approaching the intersection, it yields priority to vehicle B, which should itself yield priority but is forcing it. This refers to a situation in which vehicle B has entered the intersection and has already passed its stop point, so vehicle B is not going to stop and has to leave the intersection. Until now, vehicle A continued to pass through the intersection without taking vehicle B into account; now it checks whether any vehicle is forcing priority (vehicle A is in the INTERSECTION_BLOCKED state). (Vehicle A is the red car with the blue sphere, B is the white car to which it points.)

  5. LEFT_HAND_RULE_ENTERING_INTERSECTION - vehicle A, before entering an intersection where the traffic lights are off, yields priority to vehicles (e.g. B) that are approaching the intersection and are on the left side of vehicle A. Until now, situations in which the lights are off were not handled: if a vehicle didn't have a red light and was going straight, it simply entered the intersection. Now vehicle A checks whether the vehicles on its left (e.g. B) have a red light; if not, it yields them priority. (Vehicle A is the truck with the gray sphere, B is the white car to which it points.)

  6. LEFT_HAND_RULE_AT_INTERSECTION - when vehicle A is already at the intersection, it yields priority to vehicles (e.g. B) that are also at the intersection and are on its left side - in cases where no other yielding rules are resolved between them (i.e. there are no RightOfWayLanes between them). (Vehicle A is red, B is white.)

  7. LANES_RULES_ENTERING_INTERSECTION - when vehicle B intends to turn left and is approaching an intersection where it needs to yield to vehicle A, which is going straight ahead, it goes to the LANES_RULES_ENTERING_INTERSECTION state. The introduced changes take into account that a vehicle approaching the intersection considers not only the vehicles at the intersection but also those approaching it (at a distance of less than minimumDistanceToIntersection from the intersection). (Vehicle B is the truck with the yellow sphere, A is the white car to which it points.)

  8. LANES_RULES_AT_INTERSECTION - when vehicle B intends to turn right and is already at an intersection where it needs to yield to vehicle A, which is approaching the intersection, it goes to the LANES_RULES_AT_INTERSECTION state. The introduced changes take into account that a vehicle approaching the intersection considers not only the vehicles at the intersection but also those approaching it (at a distance of less than minimumDistanceToIntersection from the intersection). (Vehicle B is the car with the red sphere, A is the white car to which it points.)

  9. FORCING_PRIORITY - state in which some vehicle B should yield priority to a vehicle A but doesn't - most likely in some unusual situation in which all other rules have failed. In that case vehicle A, which is at the intersection, yields priority to the vehicle that is forcing priority and transitions to the FORCING_PRIORITY state. This state is reached very rarely, but it does happen.
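The phase transitions above can be summarized as a simple state machine. Below is a minimal, illustrative sketch of those transitions in Python; the helper predicates (is_approaching_intersection and so on) are hypothetical stand-ins for the actual AWSIM checks, not part of its API.

from enum import Enum, auto

class YieldPhase(Enum):
    NONE = auto()
    ENTERING_INTERSECTION = auto()
    AT_INTERSECTION = auto()
    INTERSECTION_BLOCKED = auto()
    LEFT_HAND_RULE_ENTERING_INTERSECTION = auto()
    LEFT_HAND_RULE_AT_INTERSECTION = auto()
    LANES_RULES_ENTERING_INTERSECTION = auto()
    LANES_RULES_AT_INTERSECTION = auto()
    FORCING_PRIORITY = auto()

# Situations checked before entering and while at the intersection.
ENTERING_CHECKS = (YieldPhase.LANES_RULES_ENTERING_INTERSECTION,
                   YieldPhase.LEFT_HAND_RULE_ENTERING_INTERSECTION,
                   YieldPhase.INTERSECTION_BLOCKED)
AT_CHECKS = (YieldPhase.LANES_RULES_AT_INTERSECTION,
             YieldPhase.LEFT_HAND_RULE_AT_INTERSECTION,
             YieldPhase.FORCING_PRIORITY)

def next_phase(vehicle):
    """One update step of the yielding state machine for a single NPC."""
    p = vehicle.phase
    if p is YieldPhase.NONE and vehicle.is_approaching_intersection():
        return YieldPhase.ENTERING_INTERSECTION
    if p is YieldPhase.ENTERING_INTERSECTION:
        for situation in ENTERING_CHECKS:
            if vehicle.situation_occurs(situation):
                return situation  # the vehicle must yield priority
        if vehicle.has_entered_intersection():
            return YieldPhase.AT_INTERSECTION
    if p is YieldPhase.AT_INTERSECTION:
        for situation in AT_CHECKS:
            if vehicle.situation_occurs(situation):
                return situation
        if vehicle.has_left_intersection():
            return YieldPhase.NONE
    return p  # yielding states persist until the blocking situation resolves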

"},{"location":"Components/Traffic/RandomTraffic/YieldingRules/#gizmos-markings","title":"Gizmos Markings","text":""},{"location":"Components/Traffic/TrafficComponents/","title":"Traffic Components","text":"

This section is still under development!

This section describes in detail all components related to simulated traffic in the Environment prefab.

"},{"location":"Components/Traffic/TrafficComponents/#architecture","title":"Architecture","text":"

The random traffic system consists of the following components:

The process of spawning an NPCVehicle and controlling its subsequent behavior is presented in the following sequence diagram.

Sequence Diagram Composition

Please note that the diagram composition has been simplified to the level of GameObjects and selected elements of the GameObjects in order to improve readability.

"},{"location":"Components/Traffic/TrafficComponents/#lanelet2","title":"Lanelet2","text":"

Lanelet2 is a library created for handling maps focused on automated driving. It also supports ROS and ROS2 natively. In AWSIM, Lanelet2 is used for reading and handling a map of all roads. Specifically, it contains all TrafficLanes and StopLines. You may also see us referring to the actual map data file (*.osm) as a Lanelet2.

Lanelet2 official page

If you want to learn more, we encourage you to visit the official project page.

"},{"location":"Components/Traffic/TrafficComponents/#randomtrafficsimulator","title":"RandomTrafficSimulator","text":"

Nomenclature

Please note that both the GameObject present in the default Scene and one of the TrafficSimulator variants described below are named RandomTrafficSimulator. Keep this in mind when reading the following page - so you don't get confused.

RandomTrafficSimulator simulates traffic with respect to all traffic rules. The system allows for random selection of car models and the paths they follow. It also allows adding static vehicles in the simulation.

"},{"location":"Components/Traffic/TrafficComponents/#link-in-the-default-scene","title":"Link in the default Scene","text":"

The RandomTrafficSimulator consists of several GameObjects.

"},{"location":"Components/Traffic/TrafficComponents/#components","title":"Components","text":"

RandomTrafficSimulator has only one component: Traffic Manager (script), which is described below.

"},{"location":"Components/Traffic/TrafficComponents/#trafficmanager-script","title":"TrafficManager (script)","text":"

Traffic Manager (script) is responsible for all of the top-level management of the NPCVehicles. It manages the spawning of NPCVehicles on TrafficLanes.

TrafficManager uses the concept of TrafficSimulators. One TrafficSimulator is responsible for managing its own set of NPCVehicles. Every TrafficSimulator spawns its own NPCVehicles independently, and the vehicles spawned by one TrafficSimulator respect its configuration. TrafficSimulators can be interpreted as NPCVehicle spawners, each with a different configuration. Many different TrafficSimulators can be added to the TrafficManager.

If a random mode is selected (RandomTrafficSimulator), NPCVehicles will spawn in random places (from the selected list) and drive in random directions. To be able to reproduce the behavior of the RandomTrafficSimulator, a Seed can be specified - which is used for pseudo-random number generation.

TrafficManager script also configures all of the spawned NPCVehicles, so that they all have common parameters

The Vehicle Layer Mask and Ground Layer Mask are used to make sure all vehicles can correctly interact with the ground to guarantee simulation accuracy.

Max Vehicle Count specifies how many NPCVehicles can be present on the scene at once. When the number of NPCVehicles on the scene is equal to this value the RandomTrafficSimulator stops spawning new vehicles until some existing vehicles drive away and disappear.

The EgoVehicle field provides the information about the Ego vehicle, used for correct behavior of NPCVehicles when interacting with Ego.

Show Gizmos checkbox specifies whether the Gizmos visualization should be displayed when running the simulation.

Show Yielding Phase checkbox specifies whether yielding phases should be displayed by Gizmos - in the form of spheres above vehicles, details in the Markings section.

Show Obstacle Checking checkbox specifies whether obstacle checking should be displayed by Gizmos - in the form of boxes in front of vehicles.

Show Spawn Points checkbox specifies whether spawn points should be displayed by Gizmos - in the form of flat cuboids on roads.

Gizmos performance

Gizmos have a high computational load. Enabling them may cause the simulation to lag.

As mentioned earlier - TrafficManager may contain multiple TrafficSimulators. The two available variants of TrafficSimulator are described below.

TrafficSimulators should be interpreted as spawning configurations for some group of NPCVehicles on the scene.

"},{"location":"Components/Traffic/TrafficComponents/#random-traffic","title":"Random Traffic","text":"

When using RandomTrafficSimulator, the NPCVehicle prefabs (NPC Prefabs) can be chosen as well as Spawnable Lanes. The latter are the only TrafficLanes on which the NPCVehicles can spawn. Upon spawning, one of the Spawnable Lanes is chosen and - given the vehicle limits are not reached - one random NPCVehicle from the NPC Prefabs list is spawned on that lane. After spawning, the NPCVehicle takes a random route until it drives out of the map - then it is destroyed.

The Maximum Spawns field specifies how many Vehicles should be spawned before this TrafficSimulator stops working. Set to 0 to disable this restriction.

"},{"location":"Components/Traffic/TrafficComponents/#route-traffic","title":"Route Traffic","text":"

When using RouteTrafficSimulator, the NPCVehicle prefabs (NPC Prefabs) as well as a Route can be chosen. The latter is an ordered list of TrafficLanes that all spawned vehicles will drive on. Given the vehicle limit is not reached, the RouteTrafficSimulator will spawn one of the NPC Prefabs, chosen randomly, on the first Route element (Element 0). After the first vehicle drives off, the next one will spawn according to the configuration. It is important for all Route elements to be connected and to be arranged in order of appearance on the map. The NPCVehicle disappears after completing the Route.

The Maximum Spawns field specifies how many Vehicles should be spawned before this TrafficSimulator stops working. Set to 0 to disable this restriction.
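To make the relationship between the TrafficManager, its TrafficSimulators and the limits described above more concrete, here is a minimal conceptual sketch in Python - not the actual AWSIM C# implementation; all names are illustrative:

import random

class TrafficSimulatorConfig:
    """One spawning configuration (Random or Route variant alike)."""
    def __init__(self, npc_prefabs, lanes, maximum_spawns=0, seed=None):
        self.npc_prefabs = npc_prefabs
        self.lanes = lanes
        self.maximum_spawns = maximum_spawns  # 0 disables the restriction
        self.spawned_so_far = 0
        self.rng = random.Random(seed)        # Seed makes runs reproducible

    def try_spawn(self):
        if self.maximum_spawns and self.spawned_so_far >= self.maximum_spawns:
            return None  # this simulator has stopped working
        self.spawned_so_far += 1
        return (self.rng.choice(self.npc_prefabs), self.rng.choice(self.lanes))

class TrafficManager:
    """Top-level management: Max Vehicle Count is shared by all simulators."""
    def __init__(self, max_vehicle_count, simulators):
        self.max_vehicle_count = max_vehicle_count
        self.simulators = simulators
        self.active_vehicles = []

    def step(self):
        for simulator in self.simulators:
            if len(self.active_vehicles) >= self.max_vehicle_count:
                return  # wait until some vehicles drive away and despawn
            vehicle = simulator.try_spawn()
            if vehicle is not None:
                self.active_vehicles.append(vehicle)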

"},{"location":"Components/Traffic/TrafficComponents/#parameter-explanation","title":"Parameter explanation","text":"Parameter Description General Settings Seed Seed value for random generator Ego Vehicle Transform of ego vehicle Vehicle Layer Mask LayerMask that masks only vehicle(NPC and ego) colliders Ground Layer Mask LayerMask that masks only ground colliders of the map Culling Distance Distance at which NPCs are culled relative to EgoVehicle Culling Hz Culling operation cycle NPCVehicle Settings Max Vehicle Count Maximum number of NPC vehicles to be spawned in simulation NPC Prefabs Prefabs representing controlled vehicles. They must have NPCVehicle component attached. Spawnable Lanes TrafficLane components where NPC vehicles can be spawned during traffic simulation Vehicle Config Parameters for NPC vehicle controlSudden Deceleration is a deceleration related to emergency braking Debug Show Gizmos Enable the checkbox to show editor gizmos that visualize behaviours of NPCs"},{"location":"Components/Traffic/TrafficComponents/#traffic-light-script","title":"Traffic Light (script)","text":"

Traffic Light (script) is a component added to every TrafficLight on the scene. It is responsible for configuring the TrafficLight behavior - the bulbs and their colors.

The Renderer field points to the renderer that should be configured - in this case it is always a TrafficLight renderer.

Bulbs Emission Config is a list describing available colors for this Traffic Light. Every element of this list configures the following

The Bulb Material Config is a list of available bulbs in a given Traffic Light. Every element describes a different bulb. Every bulb has the following aspects configured

"},{"location":"Components/Traffic/TrafficComponents/#trafficintersections","title":"TrafficIntersections","text":"

TrafficIntersection is a representation of a road intersection. It consists of several components. TrafficIntersection is used in the Scene for managing TrafficLights. All Traffic Lights present on one Traffic Intersection must be synchronized - this is why the logic of TrafficLight operation is included in the TrafficIntersection.

"},{"location":"Components/Traffic/TrafficComponents/#link-in-the-default-scene_1","title":"Link in the default Scene","text":"

Every TrafficIntersection has its own GameObject and is added as a child of the aggregate TrafficIntersections Object. TrafficIntersections are elements of an Environment, so they should be placed as children of an appropriate Environment Object.

"},{"location":"Components/Traffic/TrafficComponents/#components_1","title":"Components","text":"

TrafficIntersection has the following components:

"},{"location":"Components/Traffic/TrafficComponents/#collider","title":"Collider","text":"

Every TrafficIntersection contains a Box Collider element. It needs to accurately cover the whole area of the TrafficIntersection. Box Collider - together with the Traffic Intersection (script) - is used for detecting vehicles entering the TrafficIntersection.

"},{"location":"Components/Traffic/TrafficComponents/#traffic-intersection-script","title":"Traffic Intersection (script)","text":"

Traffic Intersection (script) is used for controlling all TrafficLights on a given intersection. The Collider Mask field is a mask on which all Vehicle Colliders are present. It - together with Box Collider - is used for keeping track of how many Vehicles are currently present on the Traffic Intersection. The Traffic Light Groups and Lighting Sequences are described below.

"},{"location":"Components/Traffic/TrafficComponents/#traffic-light-groups","title":"Traffic Light Groups","text":"

Traffic Light Group is a collection of all Traffic Lights that are in the same state at all times. This includes all redundant Traffic Lights shining in one direction as well as the ones in the opposite direction. In other words - as long as two Traffic Lights indicate exactly the same thing they should be added to the same Traffic Light Group. This grouping simplifies the creation of Lighting Sequences.

"},{"location":"Components/Traffic/TrafficComponents/#lighting-sequences","title":"Lighting Sequences","text":"

Lighting Sequences is the field in which the whole intersection Traffic Lights logic is defined. It consists of many different Elements. Each Element is a collection of Orders that should take effect for the period of time specified in the Interval Sec field. Lighting Sequences Elements are executed sequentially, in order of definition, and looped - after the last element the sequence goes back to the first element.

The Group Lighting Orders field defines which Traffic Light Groups should change their state and how. For every Group Lighting Orders Element the Traffic Lights Group is specified with the exact description of the goal state for all Traffic Lights in that group - which bulb should light up and with what color.

One Lighting Sequences Element has many Group Lighting Orders, which means that for one period of time many different orders can be given. E.g. when Traffic Lights in one direction change color to green - Traffic Lights in the parallel direction change color to red.

Traffic Light state persistence

If in the given Lighting Sequences Element no order is given to some Traffic Light Group - this Group will keep its current state. When the next Lighting Sequences Element activates - the given Traffic Light Group will remain in an unchanged state.
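As an illustration of the looping and state-keeping behavior described above, here is a minimal Python sketch; the group names and intervals are taken from the sample below, and none of this is AWSIM code:

import itertools

# Each element: (Interval Sec, Group Lighting Orders).
sequence = [
    (5.0, {"Pedestrian Group 1": "flashing green"}),
    (1.0, {"Pedestrian Group 1": "solid red"}),
    (5.0, {"Vehicle Group 1": "solid yellow"}),
    (3.0, {"Vehicle Group 1": "solid red"}),
]

state = {}
# itertools.cycle loops back to the first element after the last one;
# islice just keeps this demo finite.
for interval_sec, orders in itertools.islice(itertools.cycle(sequence), 8):
    state.update(orders)  # groups without an order keep their current state
    print(f"for {interval_sec} s: {state}")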

Lighting Sequence Sample - details

Description Editor Traffic Lights in Pedestrian Group 1 change color to flashing green. Other Groups keep their current state. This state lasts for 5 seconds. Traffic Lights in Pedestrian Group 1 change color to solid red. Other Groups keep their current state. This state lasts for 1 second. Traffic Lights in Vehicle Group 1 change color to solid yellow. Other Groups keep their current state. This state lasts for 5 seconds. Traffic Lights in Vehicle Group 1 change color to solid red. Other Groups keep their current state. This state lasts for 3 seconds. Traffic Lights in Vehicle Group 2 change color to solid green. Traffic Lights in Pedestrian Group 2 change color to solid green. Other Groups keep their current state. This state lasts for 15 seconds. Traffic Lights in Pedestrian Group 2 change color to flashing green. Other Groups keep their current state. This state lasts for 5 seconds. Traffic Lights in Pedestrian Group 2 change color to solid red. Other Groups keep their current state. This state lasts for 1 second. Traffic Lights in Vehicle Group 2 change color to solid yellow. Other Groups keep their current state. This state lasts for 5 seconds. Traffic Lights in Vehicle Group 2 change color to solid red. Other Groups keep their current state. This state lasts for 3 seconds. Sequence loops back to the first element of the list."},{"location":"Components/Traffic/TrafficComponents/#trafficlanes","title":"TrafficLanes","text":"

TrafficLane is a representation of a short road segment. It consists of several waypoints that are connected by straight lines. TrafficLanes are used as a base for a RandomTrafficSimulator. They allow NPCVehicles to drive on the specific lanes on the road and perform different maneuvers with respect to the traffic rules. TrafficLanes create a network of drivable roads when connected.

"},{"location":"Components/Traffic/TrafficComponents/#link-in-the-default-scene_2","title":"Link in the default Scene","text":"

Every TrafficLane has its own GameObject and is added as a child of the aggregate TrafficLanes Object. TrafficLanes are an element of an Environment, so they should be placed as children of an appropriate Environment Object.

TrafficLanes can be imported from the lanelet2 *.osm file.

"},{"location":"Components/Traffic/TrafficComponents/#components_2","title":"Components","text":"

TrafficLane consists of an Object containing Traffic Lane (script).

TrafficLane has a transformation property - as every Object in Unity - however it is not used in any way. All details are configured in the Traffic Lane (script), the information in Object transformation is ignored.

"},{"location":"Components/Traffic/TrafficComponents/#traffic-lane-script","title":"Traffic Lane (script)","text":"

Traffic Lane (script) defines the TrafficLane structure. The Waypoints field is an ordered list of points that - when connected with straight lines - create a TrafficLane.

Traffic Lane (script) coordinate system

Waypoints are defined in the Environment coordinate system, the transformation of GameObject is ignored.

The Turn Direction field contains information about the direction of this TrafficLane - whether it is a right turn, a left turn or a straight road.

Traffic lanes are connected using Next Lanes and Prev Lanes fields. This way individual TrafficLanes can create a connected road network. One Traffic Lane can have many Next Lanes and Prev Lanes. This represents the situation of multiple lanes connecting to one or one lane splitting into many - e.g. the possibility to turn and to drive straight.

Right Of Way Lanes are described below.

Every TrafficLane has to have its Stop Line field configured when a Stop Line is present at the end of the TrafficLane. Additionally, the Speed Limit field contains the highest allowed speed on the given TrafficLane.
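For illustration, the fields described above can be pictured as the following data structure - an explanatory Python sketch with assumed field names, not the actual C# class:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TrafficLane:
    waypoints: List[tuple]                 # ordered points joined by straight lines
    turn_direction: str                    # straight road, left turn or right turn
    next_lanes: List["TrafficLane"] = field(default_factory=list)
    prev_lanes: List["TrafficLane"] = field(default_factory=list)
    right_of_way_lanes: List["TrafficLane"] = field(default_factory=list)
    stop_line: Optional[object] = None     # set when a StopLine ends this lane
    speed_limit: float = 0.0               # highest allowed speed on this lane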

"},{"location":"Components/Traffic/TrafficComponents/#right-of-way-lanes","title":"Right Of Way Lanes","text":"

Right Of Way Lanes is a collection of TrafficLanes. A vehicle moving on the given TrafficLane has to give way to all vehicles moving on every Right Of Way Lane. It is determined based on basic traffic rules. Setting Right Of Way Lanes allows RandomTrafficSimulator to manage all NPCVehicles so that they follow traffic rules and drive safely.

In the Unity editor - when a TrafficLane is selected - aside from the selected TrafficLane highlighted in blue, all Right Of Way Lanes are highlighted in yellow.

Right Of Way Lanes Sample - details

The selected TrafficLane (blue) is a right turn on an intersection. This means, that before turning right the vehicle must give way to all vehicles driving from ahead - the ones driving straight as well as the ones turning left. This can be observed as TrafficLanes highlighted in yellow.

"},{"location":"Components/Traffic/TrafficComponents/#stoplines","title":"StopLines","text":"

StopLine is a representation of a place on the road where vehicles giving way to other vehicles should stop and wait. StopLines allow RandomTrafficSimulator to manage NPCVehicles in a safe and correct way - according to the traffic rules. All possible locations where a vehicle can stop in order to give way to other vehicles - those enforced by the infrastructure; this does not include regular lane changing - need to be marked with StopLines.

"},{"location":"Components/Traffic/TrafficComponents/#link-in-the-default-scene_3","title":"Link in the default Scene","text":"

Every StopLine has its own GameObject and is added as a child of the aggregate StopLines Object. Stop Lines are an element of an Environment, so they should be placed as children of an appropriate Environment Object.

StopLines can be imported from the lanelet2 *.osm file.

"},{"location":"Components/Traffic/TrafficComponents/#components_3","title":"Components","text":"

StopLine consists of an Object containing Stop Line (script).

Stop Line has a transformation property - as every Object in Unity - however it is not used in any way. All details are configured in the Stop Line (script); the information in the Object transformation is ignored.

"},{"location":"Components/Traffic/TrafficComponents/#stop-line-script","title":"Stop Line (script)","text":"

Stop Line (script) defines the StopLine configuration. The Points field is an ordered list of points that - when connected - create a StopLine. The list should always have exactly two elements, which create a straight StopLine.

Stop Line (script) coordinate system

Points are defined in the Environment coordinate system, the transformation of GameObject is ignored.

The Has Stop Sign field contains information on whether the configured StopLine has a corresponding StopSign on the scene.

Every Stop Line needs to have a Traffic Light field configured with the corresponding Traffic Light. This information allows the RandomTrafficSimulator to manage the NPCVehicles in such a way that they respect the Traffic Lights and behave on the Traffic Intersections correctly.

"},{"location":"Components/Traffic/TrafficComponents/#gizmos","title":"Gizmos","text":"

Gizmos are an in-simulation visualization showing the current and future moves of the NPCVehicles. They are useful for checking the current behavior of NPCs and its causes. On the Scene they are visible as cuboid contours indicating which TrafficLanes will be taken by each vehicle in the near future.

Gizmos computing

Gizmos have a high computational load. Please disable them if the simulation is laggy.

"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/","title":"Add Vehicle","text":"

Ego Vehicle Component

In this tutorial we will create a new EgoVehicle. To learn more about what an EgoVehicle is in AWSIM, please visit the Ego Vehicle description page.

"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/#cerate-an-object","title":"Cerate an Object","text":"

Add a child Object to the Simulation called EgoVehicle.

"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/#add-a-rigidbody","title":"Add a Rigidbody","text":"
  1. While having a newly created EgoVehicle Object selected, in the Inspector view click on the 'Add Component' button, search for Rigidbody and select it.

  2. Configure Mass and Drag with the correct values for your Vehicle.

  3. Configure Interpolation and Collision Detection.

"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/#add-visual-elements","title":"Add visual elements","text":"

For a detailed explanation of how to add visual elements to your Vehicle, check out this dedicated tutorial.

"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/#add-a-canter-of-mass","title":"Add a Canter of Mass","text":"

To add a center of mass to your vehicle you have to add a CoM child Object to the EgoVehicle Object (the same as in steps before).

Then just set the position of the CoM Object in the Inspector view to represent real-world center of mass of the Vehicle.

How do I know what is the Center of Mass of my Vehicle

The best way is to obtain a Center of Mass information from your Vehicle documentation.

However, if this is not possible, you can try to estimate the Center of Mass of your vehicle. Best practice is to set the estimated Center of Mass as follows

Note: This will vary very much depending on your Vehicle construction. For the best possible result please follow the Vehicle specifications.

"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/#add-a-reflection-probe","title":"Add a Reflection Probe","text":"
  1. Add a new Object called Reflection Probe as a child to the EgoVehicle Object.

  2. Click on the 'Add Component' button; in the window that pops up, search for Reflection Probe and select it.

    !!!note Please note that together with the Reflection Probe, an HD Additional Reflection Data script should also be added automatically.

      ![reflection probe additional script](reflection_probe_additional_script.png)\n
  3. Configure the Reflection Probe as you wish.

    !!! example \"Example Configuration\" Below you can see an example configuration of the Reflection Probe.

      ![reflection probe configuration](reflection_probe_configuration.png)\n
"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/#add-colliders","title":"Add Colliders","text":"

For a detailed explanation of how to add colliders to your Vehicle, check out this dedicated tutorial.

"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/#add-a-base-for-sensors-urdf","title":"Add a base for sensors (URDF)","text":"

You will most certainly want to add some sensors to your EgoVehicle. First you need to create a parent Object for all those sensors called URDF. To do this we will add a child Object URDF to the EgoVehicle Object.

This Object will be used as a base for all sensors we will add later.

"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/#add-a-vehicle-script","title":"Add a Vehicle Script","text":"

To be able to control your EgoVehicle you need a Vehicle Script.

  1. Add the Vehicle Script to the EgoVehicle Object.

  2. Configure the Vehicle Script Axle Settings and Center Of Mass Transform.

Testing

It is not possible to test this Script alone, but you can test the following

If components listed above work correctly this means the Vehicle Script works correctly too.

"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/#add-a-vehicle-keyboard-input-script","title":"Add a Vehicle Keyboard Input Script","text":"

You can control your EgoVehicle in the simulation manually with just one Script called Vehicle Keyboard Input.

If you want to add it just click the 'Add Component' button on the EgoVehicle Object and search for Vehicle Keyboard Input Script and select it.

"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/#add-a-vehicle-visual-effect-script","title":"Add a Vehicle Visual Effect Script","text":"

For a visual indication of a Vehicle status you will need a Vehicle Visual Effect Script. To add and configure it follow the steps below.

  1. Add a Vehicle Visual Effect Script by clicking 'Add Component' button, searching for it and selecting it.

  2. Configure the lights.

    !!!note In this step we will configure only Brake Lights, but you should repeat this for every Light. The process is almost the same for all Lights - just change the mesh renderer and lighting settings according to your preference.

"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/#how-to-test","title":"How to test","text":"

After configuring Vehicle Visual Effect Script it is advised to test whether everything works as expected.

  1. Make sure you have a Vehicle Keyboard Input Script added and that it is enabled.

  2. If your scene does not have any models yet, please turn the gravity off in the Rigidbody configuration so that the Vehicle does not fall down indefinitely.

  3. Start the simulation.

  4. Test the Turn Signals.

    You can control the Turn Signals with a Vehicle Keyboard Input Script. Activate the Turn Signals with one of the following keys

    - 1 - Left Turn Signal
    - 2 - Right Turn Signal
    - 3 - Hazard Lights
    - 4 - Turn Off all Signals

  5. Test the Lights.

    You can control the lights by \"driving\" the Vehicle using Vehicle Keyboard Input Script. Although if you have an empty Environment like in this tutorial the Vehicle won't actually drive.

    To test Brake Lights change the gear to Drive by pressing D on the keyboard and activate braking by holding arrow down.

    To test the Reverse Light change the gear to Reverse by pressing R on the keyboard. The Reverse Light should turn on right away.

Camera tip

If you have not configured a camera, or configured it in such a way that you can't see the Vehicle well, you can still test most of the lights by changing views.

Please note that this method won't work for testing Brake Lights, as for them to work you need to keep the arrow down button pressed all the time.

"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/#add-a-vehicle-ros-input-script","title":"Add a Vehicle Ros Input Script","text":"

For controlling your Vehicle with autonomous driving software (e.g. Autoware) you need a Vehicle Ros Input Script.

Disable Vehicle Keyboard Input Script

If you have added a Vehicle Keyboard Input Script in your Vehicle please disable it when using the Vehicle Ros Input Script.

Not doing so will lead to the vehicle receiving two different inputs which will cause many problems.

Add it to the EgoVehicle Object by clicking on the 'Add Component' button, searching for it and selecting it.

The Script is configured to work with Autoware by default, but you can change the topics and Quality of Service settings as you wish.

Note

The Vehicle should be configured correctly, but if you have many Vehicles or something goes wrong, please select the right Vehicle in the Vehicle field by clicking on the small arrow icon and choosing the right item from the list.

"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/#how-to-test_1","title":"How to test","text":"

The best way to test the Vehicle Ros Input Script is to run Autoware.

  1. Run the Scene the same as on this page.
  2. Launch only Autoware like on this page.
  3. Plan a path in Autoware like here; if the Vehicle moves in AWSIM correctly, then the Script is configured well.
"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/#add-sensors","title":"Add Sensors","text":"

For a detailed explanation of how to add sensors to your Vehicle, check out this dedicated tutorial.

"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/#add-a-vehicle-to-scene","title":"Add a Vehicle to Scene","text":"

First you will have to save the Vehicle you created as a prefab, to easily add it later to different Scenes.

  1. Open the Vehicles directory in the Project view (Assets/AWSIM/Prefabs/Vehicles)
  2. Drag the Vehicle Object from the Hierarchy view to the Vehicles directory

After that, you can add the Vehicle you created to different Scenes by dragging it from Vehicles directory to the Hierarchy of different Scenes.

"},{"location":"Components/Vehicle/AddNewVehicle/AddColliders/","title":"Add Colliders","text":"

Next you need to add Colliders to your Vehicle. To do this follow the steps below.

  1. Add a child Object called Colliders to the EgoVehicle Object.

  2. Shift the parent Object Colliders accordingly, as in the earlier steps where we shifted the Models.

"},{"location":"Components/Vehicle/AddNewVehicle/AddColliders/#add-a-vehicle-collider","title":"Add a Vehicle Collider","text":"
  1. Add a child Object Collider to the Colliders Object.

  2. Add a Mesh Collider component to the Collider Object by clicking on the 'Add Component' button in the Inspector view and searching for it.

  3. Click on the arrow in the mesh selection field and from the pop-up window select your collider mesh. Next click on the check-box called Convex; now your collider mesh should be visible in the editor.

"},{"location":"Components/Vehicle/AddNewVehicle/AddColliders/#add-wheel-colliders","title":"Add Wheel Colliders","text":"
  1. Add a child Object Wheels to the Colliders Object.

Note

In this tutorial we will add only one wheel collider, but you should repeat the step for all 4 wheels. That is, follow the instructions that follow this message for every wheel your Vehicle has.

  1. Add a child Object FrontLeftWheel to the Wheels Object.

  2. Add a Wheel Collider component to the FrontLeftWheel Object by clicking 'Add Component' and searching for it.

  3. Add a Wheel Script to the FrontLeftWheel Object by clicking 'Add Component' and searching for it.

  4. Drag FrontLeftWheel Object from the WheelVisuals to the Wheel Visual Transform field.

  5. Add a Wheel Collider Config Script to the FrontLeftWheel Object by clicking 'Add Component' and searching for it.

  6. Configure the Wheel Collider Config Script so that the Vehicle behaves as you wish.

  7. Set the Transform of FrontLeftWheel Object to match the visuals of your Vehicle.

Successful configuration

If you have done everything right your Colliders Object should look similar to the one following.

"},{"location":"Components/Vehicle/AddNewVehicle/AddSensors/","title":"Add Sensors","text":""},{"location":"Components/Vehicle/AddNewVehicle/AddSensors/#awsim-sensors","title":"AWSIM Sensors","text":"

There is a number of different sensors available in AWSIM. Below we present a list of sensors with links to their individual pages.

"},{"location":"Components/Vehicle/AddNewVehicle/AddSensors/#add-links-for-sensors","title":"Add links for sensors","text":"

Best practice is to replicate the ROS sensor transformation tree in Unity using Objects.

"},{"location":"Components/Vehicle/AddNewVehicle/AddSensors/#coordinate-system-conversion","title":"Coordinate system conversion","text":"

Please note that Unity uses a less common left-handed coordinate system. Please keep this in mind while defining transformations. More details about right-handed and left-handed systems can be found here.

To simplify the conversion process, always remember that any point in the ROS coordinate system (x, y, z) has an equivalent in the Unity coordinate system of (-y, z, x).

The same can be done with the rotation. A ROS orientation described with roll, pitch and yaw (r, p, y) translates to a Unity rotation as (p, -y, -r).

Unit conversion

Please remember to convert the rotation units. ROS uses radians and Unity uses degrees. The conversion from radians (rad) to degrees (deg) is as follows.

deg = rad * 180 / PI\n
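Putting the position, rotation and unit conversions together, a small helper could look like this - an illustrative Python sketch of the formulas above:

import math

def ros_to_unity_position(x, y, z):
    # ROS (x, y, z) -> Unity (-y, z, x)
    return (-y, z, x)

def ros_to_unity_rotation(roll, pitch, yaw):
    # ROS (r, p, y) in radians -> Unity (p, -y, -r) in degrees
    return tuple(math.degrees(v) for v in (pitch, -yaw, -roll))

# Worked example from the sensor link tutorial below:
print(ros_to_unity_position(0.9, 0.0, 2.0))           # (-0.0, 2.0, 0.9)
print(ros_to_unity_rotation(-0.001, 0.015, -0.0364))  # ~(0.8594, 2.0856, 0.0573)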
"},{"location":"Components/Vehicle/AddNewVehicle/AddSensors/#add-transformations-tree","title":"Add transformations tree","text":"

URDF

Before following this tutorial please make sure you have an URDF Object as shown in this section.

First we will have to add a base_link which is the root of all transformations.

Add a base_link Object as a child to the URDF Object.

base_link transformation

Please remember to set an appropriate transformation of the base_link Object so that it is identical to the base_link used in ROS in reference to the Vehicle.

This is very important, as a mistake here will result in all subsequent sensors being misplaced.

Inside the base_link we will represent all transformations contained in the ROS transformations tree.

You will have to check your Vehicle specific configuration. You can do this in many ways, for example:

"},{"location":"Components/Vehicle/AddNewVehicle/AddSensors/#add-one-sensor-link","title":"Add one sensor link","text":"

Note

In this step we will only add one sensor link. You will have to repeat this step for every sensor you want to add to your Vehicle.

Let's say we want to add a LiDAR that is facing right.

We have the following configuration files.

base_link:\n    sensor_kit_base_link:\n        x: 0.9\n        y: 0.0\n        z: 2.0\n        roll: -0.001\n        pitch: 0.015\n        yaw: -0.0364\n
sensor_kit_base_link:\n    velodyne_right_base_link:\n        x: 0.0\n        y: -0.56362\n        z: -0.30555\n        roll: -0.01\n        pitch: 0.71\n        yaw: -1.580\n

We can clearly see the structure of the transformation tree. The transformations are as follows.

base_link -> sensor_kit_base_link -> velodyne_right_base_link\n

We need to start adding these transformations from the root of the tree. We will start with the sensor_kit_base_link, as the base_link already exists in our tree.

  1. The first step is to add an Object named the same as the transformation frame (sensor_kit_base_link).

  2. Next we have to convert the transformation from the ROS standard to the Unity standard. This is done with the formulas shown in this section.

    The result of conversion of the coordinate systems and units is shown below.

    Position:\n(0.9, 0.0, 2.0)             ->  (0.0, 2.0, 0.9)\nRotation:\n(-0.001, 0.015, -0.0364)    ->  (0.8594, 2.0856, 0.0573)\n

    The resulting sensor_kit_base_link Object transformation is shown below.

Now the same has to be done with the velodyne_right_base_link.

  1. Add transformation Object (velodyne_right_base_link).

    !!!info Remember to place the new Object correctly in the hierarchy; in this case velodyne_right_base_link is added as a child of sensor_kit_base_link, because this is what the *.yaml file says.

  2. Convert the transformation into Unity coordinate system.

    The correct transformation is shown below.

    Position:\n(0, -0.56362, -0.30555)     ->  (0.56362, -0.30555, 0)\nRotation:\n(-0.01, 0.71, -1.580)       ->  (40.68, 90.5273, 0.573)\n

    The final velodyne_right_base_link Object transformation is shown below.

Success

If you have done everything right, after adding all of the sensor links your URDF Object tree should look something like the one following.

"},{"location":"Components/Vehicle/AddNewVehicle/AddSensors/#add-sensors","title":"Add sensors","text":"

After adding links for all sensors you need to add the actual sensors into your Vehicle.

Sensor position

Please keep in mind that we have created the sensor links in order to have accurate transformations for all of the sensors. This implies that the Sensor Object itself cannot have any transformation of its own.

If one of your Sensors, after adding it to the scene, is mispositioned, check whether the transformation is set to identity (position and rotation are zeros).

When adding sensors, note that almost all of them share some common fields.

"},{"location":"Components/Vehicle/AddNewVehicle/AddSensors/#add-a-vehicle-status-sensor","title":"Add a Vehicle Status Sensor","text":"

To add a Vehicle Status Sensor to your Vehicle simply locate the following directory in the Project view and drag a prefab of this Sensor into the URDF Object.

Assets/AWSIM/Prefabs/Sensors\n

Next in the Inspector View select your Vehicle.

ROS message example

In this example you can see what a valid message from the Vehicle Status Sensor can look like.

$ ros2 topic echo --once /vehicle/status/velocity_status\nheader:\n  stamp:\n    sec: 17\n    nanosec: 709999604\n  frame_id: base_link\nlongitudinal_velocity: 0.004912620410323143\nlateral_velocity: -0.005416259169578552\nheading_rate: 0.006338323466479778\n---\n
"},{"location":"Components/Vehicle/AddNewVehicle/AddSensors/#add-a-lidar","title":"Add a LiDAR","text":"

Scene Manager

Before continuing with this tutorial please check out a dedicated one focused on Scene Manager.

To add a LiDAR to your Vehicle you will have to drag a model of the LiDAR to the link tree you have created in the earlier step.

You can use the predefined RGL LiDAR models or any other LiDAR models. In this tutorial we will be using RGL VelodyneVLP16 LiDAR model.

Simply locate the following directory in the Project view and drag the prefab into the designated sensor link.

Assets/AWSIM/Prefabs/Sensors/RobotecGPULidars\n

LiDAR noise configuration

The LiDAR Sensor in simulation returns perfect result data. This is not an accurate representation of the real world.

The LiDAR Sensor addresses this issue by applying simulated noise to the output data. You can configure the noise parameters in the Inspector View under the Configuration -> Noise Params fields.

You can optionally remove the noise simulation by unchecking Apply Distance/Angular Gaussian Noise.

You can also change the ranges of the LiDAR detection.

There is also a possibility to configure the visualization of the Point Cloud generated by the LiDAR. E.g. change the hit-point shape and size.

ROS message example

In this example you can see what a valid message from the LiDAR Sensor can look like.

$ ros2 topic echo --once /lidar/pointcloud\nheader:\n  stamp:\n    sec: 20\n    nanosec: 589999539\n  frame_id: world\nheight: 1\nwidth: 14603\nfields:\n- name: x\n  offset: 0\n  datatype: 7\n  count: 1\n- name: y\n  offset: 4\n  datatype: 7\n  count: 1\n- name: z\n  offset: 8\n  datatype: 7\n  count: 1\n- name: intensity\n  offset: 16\n  datatype: 7\n  count: 1\n- name: ring\n  offset: 20\n  datatype: 4\n  count: 1\nis_bigendian: false\npoint_step: 24\nrow_step: 350472\ndata:\n- 156\n- 218\n- 183\n- 62\n- 0\n- 189\n- 167\n- 187\n- 32\n- 58\n- 173\n- 189\n- 0\n- 0\n- 0\n- 0\n- 0\n- 0\n- 200\n- 66\n- 1\n- 0\n- 0\n- 0\n- 198\n- 129\n- 28\n- 63\n- 0\n- 6\n- 230\n- 58\n- 128\n- 184\n- 93\n- 61\n- 0\n- 0\n- 0\n- 0\n- 0\n- 0\n- 200\n- 66\n- 9\n- 0\n- 0\n- 0\n- 92\n- 2\n- 194\n- 62\n- 0\n- 141\n- 42\n- 187\n- 128\n- 89\n- 139\n- 189\n- 0\n- 0\n- 0\n- 0\n- 0\n- 0\n- 200\n- 66\n- 2\n- 0\n- 0\n- 0\n- 187\n- 168\n- 42\n- 63\n- 0\n- 159\n- 175\n- 59\n- 160\n- 243\n- 185\n- 61\n- 0\n- 0\n- 0\n- 0\n- 0\n- 0\n- 200\n- 66\n- 10\n- 0\n- 0\n- 0\n- 119\n- 186\n- 204\n- 62\n- 0\n- 254\n- 23\n- 59\n- 128\n- 143\n- 41\n- 189\n- 0\n- 0\n- 0\n- 0\n- 0\n- 0\n- 200\n- 66\n- 3\n- 0\n- 0\n- 0\n- 65\n- 241\n- 59\n- 63\n- 128\n- 0\n- 252\n- 187\n- '...'\nis_dense: true\n---\n
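Because the data array above is just packed bytes, the fields metadata tells you how to decode each point: the offsets are given per field, datatype 7 is FLOAT32, datatype 4 is UINT16 (per sensor_msgs/PointField), and point_step is 24. As a small illustrative sketch, a single point of this particular layout could be unpacked in Python like this:

import struct

def unpack_point(data, index, point_step=24):
    """Decode one point of the little-endian layout shown above."""
    base = index * point_step
    x, y, z = struct.unpack_from("<3f", bytes(data), base)           # offsets 0, 4, 8
    (intensity,) = struct.unpack_from("<f", bytes(data), base + 16)  # offset 16
    (ring,) = struct.unpack_from("<H", bytes(data), base + 20)       # offset 20
    return x, y, z, intensity, ring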
"},{"location":"Components/Vehicle/AddNewVehicle/AddSensors/#add-an-imu","title":"Add an IMU","text":"

To add an IMU to your Vehicle you will have to drag a model of the IMU to the link tree you have created in the earlier step.

You can use the provided or your own IMU Sensor. In this tutorial we will be using IMU Sensor provided with AWSIM.

Simply locate the following directory in the Project view and drag the prefab into the designated sensor link.

Assets/AWSIM/Prefabs/Sensors\n

ROS message example

In this example you can see what a valid message from the IMU Sensor can look like.

$ ros2 topic echo --once /sensing/imu/tamagawa/imu_raw\nheader:\n  stamp:\n    sec: 20\n    nanosec: 589999539\n  frame_id: tamagawa/imu_link\norientation:\n  x: 0.0\n  y: 0.0\n  z: 0.0\n  w: 1.0\norientation_covariance:\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n- 0.0\nangular_velocity:\n  x: 0.014335081912577152\n  y: 0.008947336114943027\n  z: -0.008393825963139534\nangular_velocity_covariance:\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n- 0.0\nlinear_acceleration:\n  x: 0.006333829835057259\n  y: -0.005533283110707998\n  z: -0.0018753920448943973\nlinear_acceleration_covariance:\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n---\n
"},{"location":"Components/Vehicle/AddNewVehicle/AddSensors/#add-a-gnss","title":"Add a GNSS","text":"

To add a GNSS Sensor to your Vehicle you will have to drag a model of the GNSS to the link tree you have created in the earlier step.

You can use the provided or your own GNSS Sensor. In this tutorial we will be using GNSS Sensor provided with AWSIM.

Simply locate the following directory in the Project view and drag the prefab into the designated sensor link.

Assets/AWSIM/Prefabs/Sensors\n

ROS message example

In this example you can see what a valid message from the GNSS Sensor can look like.

$ ros2 topic echo --once /sensing/gnss/pose\nheader:\n  stamp:\n    sec: 8\n    nanosec: 989999799\n  frame_id: gnss_link\npose:\n  position:\n    x: 81656.765625\n    y: 50137.5859375\n    z: 44.60169219970703\n  orientation:\n    x: 0.0\n    y: 0.0\n    z: 0.0\n    w: 0.0\n---\n
"},{"location":"Components/Vehicle/AddNewVehicle/AddSensors/#add-a-camera","title":"Add a Camera","text":"

To add a Camera Sensor to your Vehicle you will have to drag a model of the Camera to the link tree you have created in the earlier step.

Simply locate the following directory in the Project view and drag the prefab into the designated sensor link.

Assets/AWSIM/Prefabs/Sensors\n

You can configure some aspects of the Camera to your liking.

E.g. you can set the field of view (fov) of the camera by changing the Field of View field or manipulating the physical camera parameters like Focal Length.

The important thing is to configure the Camera Sensor Script correctly.

Always check whether the correct Camera Object is selected and make sure that Distortion Shader and Ros Image Shader are selected.

Example Camera Sensor Script configuration

You can add the live Camera preview onto the Scene. To do this select the Show checkbox. Additionally you can change how the preview is displayed. Change the Scale value to control the size of the preview (how many times smaller the preview will be compared to the actual screen size).

Move the preview on the screen by changing the X Axis and Y Axis values on the Image On Gui section.

Camera preview example

Testing camera with traffic light recognition

You can test the Camera Sensor traffic light recognition by positioning the vehicle on the Unity Scene in such a way that on the Camera preview you can see the traffic lights.

Remember to lock the Inspector view on Camera Object before dragging the whole Vehicle - this way you can see the preview while moving the vehicle.

Run the Scene the same as on this page.

Launch only the Autoware like on this page.

By default you should see the preview of traffic light recognition visualization in the bottom left corner of Autoware.

Traffic lights recognition example in Autoware

ROS message example

In this example you can see what a valid message from the Camera Sensor can look like.

$ ros2 topic echo --once /sensing/camera/traffic_light/image_raw\nheader:\n  stamp:\n    sec: 14\n    nanosec: 619999673\n  frame_id: traffic_light_left_camera/camera_optical_link\nheight: 1080\nwidth: 1920\nencoding: bgr8\nis_bigendian: 0\nstep: 5760\ndata:\n- 145\n- 126\n- 106\n- 145\n- 126\n- 106\n- 145\n- 126\n- 106\n- 145\n- 126\n- 105\n- 145\n- 126\n- 105\n- 145\n- 126\n- 105\n- 145\n- 126\n- 105\n- 145\n- 126\n- 105\n- 145\n- 126\n- 105\n- 145\n- 126\n- 105\n- 145\n- 126\n- 104\n- 145\n- 126\n- 104\n- 145\n- 126\n- 104\n- 145\n- 126\n- 104\n- 145\n- 126\n- 104\n- 145\n- 126\n- 104\n- 145\n- 126\n- 104\n- 145\n- 126\n- 104\n- 145\n- 126\n- 103\n- 145\n- 126\n- 103\n- 145\n- 126\n- 103\n- 145\n- 126\n- 103\n- 145\n- 126\n- 103\n- 145\n- 126\n- 103\n- 145\n- 126\n- 103\n- 145\n- 126\n- 103\n- 145\n- 126\n- 103\n- 145\n- 126\n- 103\n- 145\n- 124\n- 103\n- 145\n- 124\n- 103\n- 145\n- 124\n- 103\n- 145\n- 124\n- 103\n- 145\n- 124\n- 103\n- 145\n- 124\n- 103\n- 145\n- 124\n- 103\n- 145\n- 124\n- 101\n- 145\n- 124\n- 101\n- 145\n- 124\n- 101\n- 145\n- 124\n- 101\n- 145\n- 123\n- 101\n- 145\n- 123\n- 101\n- 145\n- 123\n- 101\n- 145\n- 123\n- '...'\n---\n
"},{"location":"Components/Vehicle/AddNewVehicle/AddSensors/#add-a-pose-sensor","title":"Add a Pose Sensor","text":"

To add a Pose Sensor to your Vehicle simply locate the following directory in the Project view and drag a prefab of this Sensor into the base_link Object.

Assets/AWSIM/Prefabs/Sensors\n

ROS message example

In this example you can see what a valid message from the Pose Sensor can look like.

$ ros2 topic echo --once /awsim/ground_truth/vehicle/pose\nheader:\n  stamp:\n    sec: 5\n    nanosec: 389999879\n  frame_id: base_link\npose:\n  position:\n    x: 81655.7578125\n    y: 50137.3515625\n    z: 42.8094367980957\n  orientation:\n    x: -0.03631274029612541\n    y: 0.0392342209815979\n    z: 0.02319677732884884\n    w: 0.9983005523681641\n---\n
"},{"location":"Components/Vehicle/AddNewVehicle/AddSensors/#test-a-sensor","title":"Test a Sensor","text":"

You can test whether the Sensor works correctly in several ways; a programmatic alternative is also sketched after the list below.

  1. Check whether the configuration is correct.

    In a terminal, source ROS with the following line (only if you haven't done so already).

    source /opt/ros/humble/setup.bash\n

    Check the details about the topic that your Sensor is broadcasting to with the following command.

    ros2 topic info -v <topic_name>\n

    !!!example In this example we can see that the message is broadcasted by AWSIM and nobody is listening. We can also examine the Quality of Service settings.

      ```log\n  $ ros2 topic info -v /awsim/ground_truth/vehicle/pose\n  Type: geometry_msgs/msg/PoseStamped\n\n  Publisher count: 1\n\n  Node name: AWSIM\n  Node namespace: /\n  Topic type: geometry_msgs/msg/PoseStamped\n  Endpoint type: PUBLISHER\n  GID: 01.10.13.11.98.7a.b1.2a.ee.a3.5a.11.00.00.07.03.00.00.00.00.00.00.00.00\n  QoS profile:\n    Reliability: RELIABLE\n    History (Depth): KEEP_LAST (1)\n    Durability: VOLATILE\n    Lifespan: Infinite\n    Deadline: Infinite\n    Liveliness: AUTOMATIC\n    Liveliness lease duration: Infinite\n\n  Subscription count: 0\n\n  ```\n
  2. Check whether correct information is broadcasted.

    In a terminal, source ROS with the following line (only if you haven't done so already).

    source /opt/ros/humble/setup.bash\n

    View one transmitted message.

    ros2 topic echo --once <topic_name>\n

    !!!example In this example we can see the Vehicles location at the moment of executing the command.

      **NOTE:** The position and orientation are relative to the frame in the `header/frame_id` field (`base_link` in this example).\n\n  ```log\n  $ ros2 topic echo --once /awsim/ground_truth/vehicle/pose\n  header:\n    stamp:\n      sec: 46\n      nanosec: 959998950\n    frame_id: base_link\n  pose:\n    position:\n      x: 81655.7265625\n      y: 50137.4296875\n      z: 42.53997802734375\n    orientation:\n      x: 0.0\n      y: -9.313260163068549e-10\n      z: -6.36646204504876e-12\n      w: 1.0\n  ---\n  ```\n
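Instead of the command-line checks above, you can also subscribe programmatically. Below is a minimal rclpy sketch, assuming a sourced ROS 2 Humble environment; the topic and message type match the Pose Sensor example:

import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PoseStamped

def main():
    rclpy.init()
    node = Node("awsim_pose_check")

    def on_pose(msg):
        p = msg.pose.position
        node.get_logger().info(
            f"pose in {msg.header.frame_id}: ({p.x:.2f}, {p.y:.2f}, {p.z:.2f})")

    node.create_subscription(
        PoseStamped, "/awsim/ground_truth/vehicle/pose", on_pose, 1)
    rclpy.spin(node)

if __name__ == "__main__":
    main()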
"},{"location":"Components/Vehicle/AddNewVehicle/AddVisualElements/","title":"Add Visual Elements","text":"

Your EgoVehicle needs many individual visual parts. Below we will add all needed visual elements.

First in EgoVehicle Object add a child Object called Models.

Inside Models Object we will add all visual models of our EgoVehicle.

"},{"location":"Components/Vehicle/AddNewVehicle/AddVisualElements/#add-a-body","title":"Add a Body","text":"

First you will need to add a Body for your Vehicle. It will contain many parts, so first let's create a Body parent Object.

Next we will need to add the Car Body.

  1. Add a child Object BodyCar to the Body Object.

  2. To the BodyCar Object add a Mesh Filter.

    Click on the 'Add Component' button, search for Mesh Filter and select it. Next search for mesh of your vehicle and select it in the Mesh field.

  3. To the BodyCar Object add a Mesh Renderer.

    Click on the 'Add Component' button, search for Mesh Renderer and select it.

  4. Specify Materials.

    You need to specify what materials will be used for rendering your EgoVehicle model. Do this by adding elements to the Materials list and selecting the materials you wish to use as shown below.

    Add as many materials as your model has sub-meshes.

    !!!tip When you add too many materials, meaning there will be no sub-meshes to apply these materials to, you will see this warning. In such a case please remove materials until this warning disappears.

      ![mesh renderer too many materials](mesh_renderer_too_many_materials.png)\n
"},{"location":"Components/Vehicle/AddNewVehicle/AddVisualElements/#add-interactive-body-parts","title":"Add interactive Body parts","text":"

In this step we will add the following parts

Info

It may seem like all of the elements above can be parts of the Body mesh, but it is important for these parts to be separate, because we need to be able to make them interactive (e.g. flashing turn signals).

Another good reason for having separate meshes for Vehicle parts is that you may have a Vehicle model but need to add something for the simulation - e.g. a roof rack with sensors - which can be achieved by adding more meshes.

Note

We will illustrate this step only for the Brake Light, but you should repeat this step of the tutorial for each element of the list above.

  1. Add a child Object to the Body Object.

  2. Add a Mesh Filter and select the mesh (the same as in section before).

  3. Add a Mesh Renderer and select the materials (the same as in section before).

"},{"location":"Components/Vehicle/AddNewVehicle/AddVisualElements/#add-wheels","title":"Add Wheels","text":"

In this step we will add individual visuals for every wheel. This process is very similar to the one before.

  1. Add a child Object to the Models Object called WheelVisuals.

Note

In this tutorial we will add only one wheel, but you should repeat the step for all 4 wheels. That is, follow the instructions that follow this message for every wheel your Vehicle has.

  1. Add a child Object to the WheelVisuals Object called FrontLeftWheel.

  2. Add a child Object to the FrontLeftWheel Object called WheelFrontL. This Object will contain the actual wheel part.

  3. Add a Mesh Filter and select the wheel mesh.

  4. Add a Mesh Renderer and select the wheel materials.

  5. Repeat the steps before to add the Brakes.

    The same way you have added the WheelFrontL Object, now add the WheelFrontLBreaks. Naturally you will have to adjust the mesh and materials used, as they will be different for the brakes than for the wheel.

    Your final brake configuration should look similar to the one following.

  6. Set the FrontLeftWheel parent Object transformation to position the wheel in correct place.

Successful configuration

If you have done everything right your WheelVisuals Object should look similar to the one following.

"},{"location":"Components/Vehicle/AddNewVehicle/AddVisualElements/#move-the-models","title":"Move the models","text":"

The last step to correctly configure the Vehicle models is to shift them so that the EgoVehicle origin is in the center of the fixed axle.

This means you need to shift the whole Models Object accordingly (change the position fields in transformation).

Tip

Add a dummy Object as a child to the EgoVehicle Object (the same as in steps before) so it is located in the origin of the EgoVehicle.

Now move the Models around relative to the dummy - change the position in the Inspector view. The dummy will help you see when the fixed axle (in the case of the Lexus from the example it is the rear axle) is aligned with the origin of EgoVehicle.

In the end delete the dummy Object as it is no longer needed.

"},{"location":"Components/Vehicle/CustomizeSlip/","title":"Customize Slip","text":""},{"location":"Components/Vehicle/CustomizeSlip/#customize-slip","title":"Customize slip","text":"

By attaching a GroundSlipMutiplier.cs script to a collider (trigger), you can change the slip of the vehicle within the range of that collider.

"},{"location":"Components/Vehicle/CustomizeSlip/#sample-scene","title":"Sample scene","text":"

Assets\\AWSIM\\Scenes\\Samples\\VehicleSlipSample.unity

"},{"location":"Components/Vehicle/CustomizeSlip/#how-to-setup","title":"How to setup","text":"
  1. Create a collider.
  2. Enable IsTrigger on it.
  3. Change the properties of FowardSlip and SidewaySlip.
"},{"location":"Components/Vehicle/EgoVehicle/","title":"Ego Vehicle","text":""},{"location":"Components/Vehicle/EgoVehicle/#introduction","title":"Introduction","text":"

EgoVehicle is a playable object that simulates a vehicle that can autonomously move around the scene. It has components (scripts) that make it possible to control it by keyboard or by Autoware (using ROS2 communication). Moreover, it provides sensory data needed for self-localization in space and detection of objects in the surrounding environment.

The default prefab EgoVehicle was developed using a Lexus RX450h 2015 vehicle model with a configured sample sensor kit.

Own EgoVehicle prefab

If you would like to develop your own EgoVehicle prefab, we encourage you to read this tutorial.

"},{"location":"Components/Vehicle/EgoVehicle/#supported-features","title":"Supported features","text":"

This vehicle model was created for Autoware simulation. Assuming that Autoware has already created a gas pedal map, the model uses acceleration as an input value. It has the following features:

AutowareSimulation

If you would like to see how EgoVehicle works or run some tests, we encourage you to familiarize yourself with the AutowareSimulation scene described in this section.

"},{"location":"Components/Vehicle/EgoVehicle/#lexus-rx450h-2015-parameters","title":"Lexus RX450h 2015 parameters","text":"Parameter Value Unit Mass \\(1500\\) \\(kg\\) Wheel base \\(2.5\\) \\(m\\) Tread width \\(Ft = 1.8; Rr = 1.8\\) \\(m\\) Center of Mass position \\(x = 0; y = 0.5; z = 0\\) \\(m\\) Moment of inertia \\(\\mathrm{yaw} = 2000; \\mathrm{roll} = 2000; \\mathrm{pitch} = 700\\) \\(kg \\cdot m^2\\) Spring rate \\(Ft = 55000; Rr = 48000\\) \\(N\\) Damper rate \\(Ft = 3000; Rr = 2500\\) \\(\\frac{N}{s}\\) Suspension stroke \\(Ft = 0.2; Rr = 0.2\\) \\(m\\) Wheel radius \\(0.365\\) \\(m\\)

Vehicle inertia

In general, measuring the moment of inertia is not easy, and past papers published by NHTSA are helpful. Measured Vehicle Inertial Parameters - NHTSA 1998

"},{"location":"Components/Vehicle/EgoVehicle/#prefab-and-fbx","title":"Prefab and Fbx","text":"

Prefab can be found under the following path:

Assets/AWSIM/Prefabs/NPCs/Vehicles/Lexus RX450h 2015 Sample Sensor.prefab\n

EgoVehicle name

In order to standardize the documentation, the name EgoVehicle will be used in this section as the equivalent of the prefab named Lexus RX450h 2015 Sample Sensor.

EgoVehicle prefab has the following content:

As you can see, it consists of 3 parent GameObjects:

All objects are described in the sections below.

"},{"location":"Components/Vehicle/EgoVehicle/#visual-elements","title":"Visual elements","text":"

The prefab is developed using models available in the form of *.fbx files. The visual elements have been loaded from the appropriate *.fbx file and are aggregated and added in the Models object.

*.fbx file for Lexus RX450h 2015 is located under the following path:

Assets/AWSIM/Models/Vehicles/Lexus RX450h 2015.fbx\n

Models object has the following content:

As you can see, the additional visual element is XX1 Sensor Kit.

It was also loaded from the *.fbx file which can be found under the following path:

Assets/AWSIM/Models/Sensors/XX1 Sensor Kit.fbx\n

Lexus RX450h 2015.fbx

The content of a sample *.fbx file is presented below, all elements except Collider have been added to the prefab as visual elements of the vehicle. Collider is used as the Mesh source for the Mesh Collider in the BodyCollider object.

"},{"location":"Components/Vehicle/EgoVehicle/#link-in-the-default-scene","title":"Link in the default Scene","text":"

The default scene contains a single Lexus RX450h 2015 Sample Sensor prefab that is added as a child of the EgoVehicle GameObject.

In the EgoVehicle prefab, the local coordinate system of the vehicle (the main prefab link) should be defined on the rear wheel axis projected onto the ground - midway between the wheels. This matters when characterizing the dynamics of the object, as it makes the vehicle's motion and control much easier to describe.

"},{"location":"Components/Vehicle/EgoVehicle/#components","title":"Components","text":"

There are several components responsible for the full functionality of Vehicle:

Scripts can be found under the following path:

Assets/AWSIM/Scripts/Vehicles/*\n
"},{"location":"Components/Vehicle/EgoVehicle/#architecture","title":"Architecture","text":"

The EgoVehicle architecture - with dependencies - is presented in the following diagram.

The communication between EgoVehicle components is presented in two diagrams - a flow diagram and a sequence diagram.

The flow diagram presents a flow of information between the EgoVehicle components.

The sequence diagram provides deeper insight into how the communication is structured and what steps each component takes. Some tasks performed by the elements are presented for clarification.

Sequence diagram

Please keep in mind that the Autoware message callbacks and the update loop shown in the sequence diagram are executed independently and concurrently. The one thing they share is a resource - the Vehicle (script).

"},{"location":"Components/Vehicle/EgoVehicle/#com","title":"CoM","text":"

CoM (Center of Mass) is an additional link that is defined to set the center of mass in the Rigidbody. The Vehicle (script) is responsible for its assignment. The position should match the real vehicle: most often, the center of mass is located at the vehicle's center, at the height of the wheel axles - as shown below.

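Below is a minimal C# sketch of how such a CoM link could be applied to the Rigidbody; the class and field names are illustrative assumptions, not the actual Vehicle (script) code.

// Minimal sketch - applies a CoM child transform to the Rigidbody.\n// Class and field names are assumptions, not the actual Vehicle (script) API.\nusing UnityEngine;\n\npublic class CenterOfMassSetter : MonoBehaviour\n{\n    [SerializeField] Transform centerOfMassLink; // the CoM child object\n\n    void Awake()\n    {\n        var body = GetComponent<Rigidbody>();\n        // Rigidbody.centerOfMass is expressed in the body's local space\n        body.centerOfMass = transform.InverseTransformPoint(centerOfMassLink.position);\n    }\n}\n
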
"},{"location":"Components/Vehicle/EgoVehicle/#colliders","title":"Colliders","text":"

Colliders are used to ensure collision between objects. In EgoVehicle, the main body collider and a collider for each wheel (in the Wheels GameObject) were added.

Colliders object has the following content:

"},{"location":"Components/Vehicle/EgoVehicle/#bodycollider","title":"BodyCollider","text":"

BodyCollider is the vehicle object responsible for ensuring collision with other objects. Additionally, it can be used to detect these collisions. The MeshCollider takes the object's Mesh and builds its collider based on it. The Mesh for the collider was also loaded from the *.fbx file, similarly to the visual elements.

"},{"location":"Components/Vehicle/EgoVehicle/#wheels-colliders","title":"Wheels Colliders","text":"

Wheel colliders are essential elements for driving vehicles on the road. They are the only parts in contact with the road surface, so it is important that they are properly configured. Each vehicle, apart from the visual elements related to the wheels, should also have 4 colliders - one for each wheel.

Wheel (script) provides a reference to the collider and visual object for the particular wheel. Thanks to this, the Vehicle (script) has the ability to perform certain actions on each of the wheels, such as:

Wheel Collider Config (script) has been developed to prevent Inspector edits of the WheelCollider, ensuring that friction is set to 0 and only wheel suspension and collisions are enabled (see the sketch at the end of this section).

Wheel Collider Config

For a better understanding of the meaning of WheelCollider we encourage you to read this manual.

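As an illustration of what such a configuration amounts to, the sketch below zeroes the PhysX friction curves on a WheelCollider so that only suspension and collision remain active. It is an approximation of the described behavior, not the script's actual code.

// Sketch: disable PhysX tyre friction on a WheelCollider so that only\n// suspension and ground collision remain active (approximation only).\nusing UnityEngine;\n\npublic static class WheelColliderSetup\n{\n    public static void DisableFriction(WheelCollider wheel)\n    {\n        var forward = wheel.forwardFriction;\n        forward.stiffness = 0f;          // no longitudinal tyre force from PhysX\n        wheel.forwardFriction = forward;\n\n        var sideways = wheel.sidewaysFriction;\n        sideways.stiffness = 0f;         // no lateral tyre force from PhysX\n        wheel.sidewaysFriction = sideways;\n    }\n}\n
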
"},{"location":"Components/Vehicle/EgoVehicle/#rigidbody","title":"Rigidbody","text":"

Rigidbody ensures that the object is controlled by the physics engine. The Mass of the vehicle should approximate its actual weight. For the vehicle to interact physically with other objects - that is, react to collisions - Is Kinematic must be turned off. Use Gravity should be turned on to ensure the correct behavior of the body during movement. In addition, Interpolate should be turned on so that the physics engine's effects are smoothed out.

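The same settings expressed in code look roughly like the sketch below (in practice they are simply set in the Inspector):

// Sketch of the Rigidbody settings described above.\nusing UnityEngine;\n\npublic class RigidbodySetup : MonoBehaviour\n{\n    void Awake()\n    {\n        var body = GetComponent<Rigidbody>();\n        body.mass = 1500f;        // approximate real vehicle mass (kg)\n        body.isKinematic = false; // let the physics engine drive the body\n        body.useGravity = true;   // react to gravity\n        body.interpolation = RigidbodyInterpolation.Interpolate; // smooth rendered motion\n    }\n}\n
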
"},{"location":"Components/Vehicle/EgoVehicle/#reflection-probe","title":"Reflection Probe","text":"

Reflection Probe is added to EgoVehicle prefab to simulate realistic reflections in a scene. It is a component that captures and stores information about the surrounding environment and uses that information to generate accurate reflections on objects in real-time. The values in the component are set as default.

HD Additional Reflection Data (script) is an additional component used to store settings for HDRP's reflection probes; it is added automatically.

"},{"location":"Components/Vehicle/EgoVehicle/#urdf-and-sensors","title":"URDF and Sensors","text":"

URDF (Unified Robot Description Format) is equivalent to the simplified URDF format used in ROS2. This format allows defining the positions of all of the vehicle's sensors in relation to its local coordinate system. URDF is built from multiple child GameObjects, each appropriately transformed relative to its parent.

A detailed description of the URDF structure and sensors added to prefab Lexus RX450h 2015 is available in this section.

"},{"location":"Components/Vehicle/EgoVehicle/#vehicle-script","title":"Vehicle (script)","text":"

Vehicle (script) provides the inputs that allow the EgoVehicle to move. The script inputs make it possible to set the acceleration of the vehicle and the steering angle of its wheels, taking into account the effects of suspension and gravity. It also provides inputs to set the gear in the gearbox and to control the turn signals. The script inputs can be set by one of the following scripts: Vehicle Ros Input (script) or Vehicle Keyboard Input (script).

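For illustration, a hypothetical component driving these inputs might look like the sketch below. The property and enum names follow the Input Data table later in this section, but the exact API is an assumption.

// Hypothetical sketch - drives the Vehicle (script) inputs each physics step.\n// Property and enum names are assumptions based on the Input Data table.\nusing UnityEngine;\n\npublic class ConstantDriveInput : MonoBehaviour\n{\n    [SerializeField] Vehicle vehicle; // the Vehicle (script) on the EgoVehicle\n\n    void FixedUpdate()\n    {\n        vehicle.AutomaticShiftInput = Vehicle.Shift.DRIVE; // gear\n        vehicle.AccelerationInput   = 1.0f;                // m/s^2\n        vehicle.SteerAngleInput     = 5.0f;                // degrees, positive = right\n        vehicle.SignalInput         = Vehicle.TurnSignal.NONE;\n    }\n}\n
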
The script performs several steps periodically:

"},{"location":"Components/Vehicle/EgoVehicle/#elements-configurable-from-the-editor-level","title":"Elements configurable from the editor level","text":"

The script uses the CoM link reference to assign the center of mass of the vehicle to the Rigidbody. In addition, Use inertia allows defining the inertia tensor for the Rigidbody component - by default it is disabled.

Physics Settings - allows setting values used to control the vehicle physics:

Axles Settings contains references to the Wheel (script) components controlling each wheel. Through them, the Vehicle (script) can set each wheel's steering angle and acceleration.

Input Settings - allows setting limits for values on the script input:

Inputs - used only as a preview of the values currently set on the script input:

"},{"location":"Components/Vehicle/EgoVehicle/#input-data","title":"Input Data","text":"Category Type Description AccelerationInput float Acceleration input (m/s^2). On the plane, output the force that will result in this acceleration. On a slope, it is affected by the slope resistance, so it does not match the input. SteerAngleInput float Vehicle steering input (degree). Negative steers left, positive right AutomaticShiftInput enumeration Vehicle gear shift input (AT).Values: PARKING, REVERSE, NEUTRAL, DRIVE. SignalInput enumeration Vehicle turn signal input.Values: NONE, LEFT, RIGHT, HAZARD."},{"location":"Components/Vehicle/EgoVehicle/#output-data","title":"Output data","text":"Category Type Description LocalAcceleration Vector3 Acceleration(m/s^2) in the local coordinate system of the vehicle Speed float Vehicle speed (m/s). SteerAngle float Vehicle steering angle (degree). Signal enumeration Vehicle turn signal. Velocity Vector3 Vehicle velocity (m/s) LocalVelocity Vector3 Vehicle local velocity (m/s) AngularVelocity Vector3 Vehicle angular velocity (rad/s)

The acceleration or deceleration of the vehicle is determined by AutomaticShiftInput and AccelerationInput. The vehicle will not move in the direction opposite to the selected gear (DRIVE or REVERSE).

Example

Sample vehicle behaviors:

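A rough sketch of this gating rule, with assumed names (it is not the actual implementation):

// Sketch of the gear/acceleration gating described above (assumed names).\nusing UnityEngine;\n\npublic enum Shift { PARKING, REVERSE, NEUTRAL, DRIVE }\n\npublic static class GearGating\n{\n    // DRIVE keeps speed >= 0, REVERSE keeps speed <= 0, otherwise no drive force.\n    public static float NextSpeed(Shift shift, float accelerationInput, float speed, float dt)\n    {\n        float dir = shift == Shift.DRIVE ? 1f : shift == Shift.REVERSE ? -1f : 0f;\n        if (dir == 0f) return speed; // NEUTRAL / PARKING: no drive force\n        float next = speed + dir * accelerationInput * dt;\n        // never cross zero against the selected gear\n        return dir > 0f ? Mathf.Max(0f, next) : Mathf.Min(0f, next);\n    }\n}\n
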
"},{"location":"Components/Vehicle/EgoVehicle/#vehicle-ros-script","title":"Vehicle Ros (script)","text":"

Vehicle Ros (script) is responsible for subscribing to vehicle control command messages. The values read from the messages are set on the inputs of the Vehicle (script).

The vehicle dynamics concept is designed around Autoware's autoware_auto_control_msgs/AckermannControlCommand and autoware_auto_vehicle_msgs/GearCommand message interfaces. The script sets the gear, the steering angle of the wheels and the acceleration of the vehicle (read from the aforementioned messages) on the Vehicle (script) input. In the case of a VehicleEmergencyStamped message, it sets the absolute acceleration to 0. In addition, also through Vehicle (script), the appropriate lights are turned on and off depending on the TurnIndicatorsCommand and HazardLightsCommand messages.

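A hedged sketch of this flow is shown below; the SimulatorROS2Node.CreateSubscription helper and the message field names are assumptions made for illustration and may not match the actual code.

// Hypothetical sketch of mapping a control command onto the Vehicle (script).\n// The subscription helper and message field names are assumptions.\nusing UnityEngine;\n\npublic class VehicleRosInputSketch : MonoBehaviour\n{\n    [SerializeField] Vehicle vehicle;\n\n    void Start()\n    {\n        SimulatorROS2Node.CreateSubscription<autoware_auto_control_msgs.msg.AckermannControlCommand>(\n            \"/control/command/control_cmd\", msg =>\n            {\n                // acceleration (m/s^2) goes straight onto the Vehicle input\n                vehicle.AccelerationInput = msg.Longitudinal.Acceleration;\n                // ROS2 steering: rad, positive = left; Unity input: deg, positive = right\n                vehicle.SteerAngleInput = -msg.Lateral.Steering_tire_angle * Mathf.Rad2Deg;\n            });\n    }\n}\n
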
"},{"location":"Components/Vehicle/EgoVehicle/#elements-configurable-from-the-editor-level_1","title":"Elements configurable from the editor level","text":""},{"location":"Components/Vehicle/EgoVehicle/#subscribed-topics","title":"Subscribed Topics","text":" Category Topic Message type Frequency (Autoware dependent) TurnIndicatorsCommand /control/command/turn_indicators_cmd autoware_auto_vehicle_msgs/TurnIndicatorsCommand 10 HazardLightsCommand /control/command/hazard_lights_cmd autoware_auto_vehicle_msgs/HazardLightsCommand 10 AckermannControlCommand /control/command/control_cmd autoware_auto_control_msgs/AckermannControlCommand 60 GearCommand /control/command/gear_cmd autoware_auto_vehicle_msgs/GearCommand 10 VehicleEmergencyStamped /control/command/emergency_cmd tier4_vehicle_msgs/msg/VehicleEmergencyStamped 60

ROS2 Topics

If you would like to know all the topics used in communication between Autoware and AWSIM, we encourage you to familiarize yourself with this section.

"},{"location":"Components/Vehicle/EgoVehicle/#vehicle-keyboard-script","title":"Vehicle Keyboard (script)","text":"

Vehicle Keyboard (script) allows the EgoVehicle to be controlled from the keyboard. It makes it possible to engage the appropriate gear, turn the lights on/off, and set the acceleration and steering of the wheels. Everything is set in the Vehicle (script) of the object assigned in the Vehicle field. The table below shows the available control options.

Button Option d Switch to move forward (drive gear) r Switch to move backwards (reverse gear) n Switch to neutral p Switch to parking gear UP ARROW Forward acceleration DOWN ARROW Reverse acceleration (decelerate) LEFT/RIGHT ARROW Turning 1 Turn left blinker on (right off) 2 Turn right blinker on (left off) 3 Turn on hazard lights 4 Turn off blinker or hazard lights

WASD

Controlling the movement of the vehicle with WASD as the equivalent of arrow keys is acceptable, but remember that the d button engages the drive gear.

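A minimal sketch of such keyboard polling (the names are assumptions, and the real script may use Unity's newer Input System instead of the legacy Input class):

// Minimal keyboard-polling sketch (assumed names, legacy Input class).\nusing UnityEngine;\n\npublic class KeyboardVehicleInput : MonoBehaviour\n{\n    [SerializeField] Vehicle vehicle;\n    [SerializeField] float maxAcceleration = 1.5f; // m/s^2, clamped by Vehicle (script)\n    [SerializeField] float maxSteerAngle = 35f;    // degrees\n\n    void Update()\n    {\n        if (Input.GetKeyDown(KeyCode.D)) vehicle.AutomaticShiftInput = Vehicle.Shift.DRIVE;\n        if (Input.GetKeyDown(KeyCode.R)) vehicle.AutomaticShiftInput = Vehicle.Shift.REVERSE;\n        if (Input.GetKeyDown(KeyCode.N)) vehicle.AutomaticShiftInput = Vehicle.Shift.NEUTRAL;\n        if (Input.GetKeyDown(KeyCode.P)) vehicle.AutomaticShiftInput = Vehicle.Shift.PARKING;\n\n        vehicle.AccelerationInput = Input.GetKey(KeyCode.UpArrow) ? maxAcceleration\n            : Input.GetKey(KeyCode.DownArrow) ? -maxAcceleration : 0f;\n\n        float steer = (Input.GetKey(KeyCode.RightArrow) ? 1f : 0f)\n                    - (Input.GetKey(KeyCode.LeftArrow) ? 1f : 0f);\n        vehicle.SteerAngleInput = steer * maxSteerAngle;\n    }\n}\n
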
"},{"location":"Components/Vehicle/EgoVehicle/#elements-configurable-from-the-editor-level_2","title":"Elements configurable from the editor level","text":"

Value limits

Max Acceleration and Max Steer Angle values greater than those set in the Vehicle (script) are limited by the script itself - they will not be exceeded.

"},{"location":"Components/Vehicle/EgoVehicle/#vehicle-visual-effect-script","title":"Vehicle Visual Effect (script)","text":"

This part of the settings configures material emission when a specific light is activated. There are 4 types of lights: Brake, Left Turn Signal, Right Turn Signal and Reverse. Each light has its visual equivalent in the form of a Mesh. In the case of EgoVehicle, each light type has its own GameObject with the Mesh assigned.

For each type of light, the appropriate Material Index (equivalent of element index in mesh) and Lighting Color are assigned - yellow for Turn Signals, red for Brake and white for Reverse.

Lighting Intensity values are also configured - the greater the value, the more light is emitted. This value works together with the Lighting Exposure Weight parameter - the lower that value, the more light is emitted.

All types of lighting are switched on and off depending on the values obtained from the Vehicle (script) of the vehicle, which is assigned in the Vehicle field.

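As a rough illustration, toggling an emissive light could look like the sketch below. Shader property names differ between render pipelines; \"_EmissionColor\" is an assumption here.

// Sketch: drive material emission for a light mesh (assumed property name).\nusing UnityEngine;\n\npublic static class LightEmission\n{\n    public static void Set(MeshRenderer renderer, int materialIndex,\n                           Color color, float intensity, bool on)\n    {\n        var material = renderer.materials[materialIndex];\n        material.EnableKeyword(\"_EMISSION\");\n        // higher intensity -> more emitted light; black disables the emission\n        material.SetColor(\"_EmissionColor\", on ? color * intensity : Color.black);\n    }\n}\n
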
"},{"location":"Components/Vehicle/EgoVehicle/#elements-configurable-from-the-editor-level_3","title":"Elements configurable from the editor level","text":""},{"location":"Components/Vehicle/FollowCamera/","title":"FollowCamera","text":""},{"location":"Components/Vehicle/FollowCamera/#introduction","title":"Introduction","text":"

The FollowCamera component is designed to track a specified target object within the scene. It is attached to the main camera and maintains a defined distance and height from the target. Additionally, it offers the flexibility of custom rotation around the target as an optional feature.

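A minimal sketch of this follow behavior is shown below; the parameter names are illustrative assumptions.

// Minimal follow-camera sketch - keeps a fixed distance and height from the target.\nusing UnityEngine;\n\npublic class SimpleFollowCamera : MonoBehaviour\n{\n    [SerializeField] Transform target;      // the object to follow\n    [SerializeField] float distance = 10f;  // metres behind the target\n    [SerializeField] float height = 5f;     // metres above the target\n\n    void LateUpdate()\n    {\n        transform.position = target.position\n                           - target.forward * distance\n                           + Vector3.up * height;\n        transform.LookAt(target);\n    }\n}\n
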
"},{"location":"Components/Vehicle/FollowCamera/#elements-configurable-from-the-editor-level","title":"Elements configurable from the editor level","text":""},{"location":"Components/Vehicle/FollowCamera/#required-member","title":"Required member","text":""},{"location":"Components/Vehicle/FollowCamera/#base-setttings","title":"Base Setttings","text":""},{"location":"Components/Vehicle/FollowCamera/#optional-movement-setttings","title":"Optional Movement Setttings","text":""},{"location":"Components/Vehicle/FollowCamera/#rotate-around-mode","title":"Rotate Around Mode","text":"

Camera rotation around the target can be activated by pressing the RotateAroundModeToggle key (default 'C' key). In this mode, the user can manually adjust the camera view at run-time using the mouse. To deactivate the Rotate Around mode, press the RotateAroundModeToggle key once more.

In the Rotate Around Mode, the camera view can be controlled as follows:

"},{"location":"Components/Vehicle/FollowCamera/#optional","title":"Optional","text":"

An optional prefab featuring a UI panel, located at Assets/Prefabs/UI/MainCameraView.prefab, can be used to showcase a user guide. To integrate this prefab into the scene, drag and drop it beneath the Canvas object. This prefab displays instructions on how to adjust the camera view whenever the Rotate Around Mode is activated.

"},{"location":"Components/Vehicle/URDFAndSensors/","title":"URDF And Sensors","text":""},{"location":"Components/Vehicle/URDFAndSensors/#urdf-and-sensors","title":"URDF and Sensors","text":"

This section describes the placement of sensors in EgoVehicle on the example of a Lexus RX450h 2015 Sample Sensor prefab.

URDF (Unified Robot Description Format) is equivalent to the simplified URDF format used in ROS2. This format allows defining the positions of all of the vehicle's sensors in relation to its main parent prefab coordinate system.

URDF is added directly to the main parent of the prefab and there are no transforms between these objects. It is built from multiple child GameObjects, each appropriately transformed relative to its parent.

The transforms in the URDF object are defined using the data from the sensor kit documentation used in the vehicle. Such data can be obtained from sensor kit packages for Autoware, for example: awsim_sensor_kit_launch - it is used in the AWSIM compatible version of Autoware. This package contains a description of transforms between coordinate systems (frames) in the form of *.yaml files: sensors_calibration and sensor_kit_calibration.

In the first file, the transform of the sensor kit frame (sensor_kit_base_link) relative to the local vehicle frame (base_link) is defined. In Unity, this transform is defined in the object Sensor Kit. The second file contains a definition of the transformations of all sensors with respect to the sensor kit - they are described in the Sensor Kit subsections.

Transformations

Please note that the transformation Objects are intended to be a direct reflection of frames existing in ROS2. All frame Objects are defined as children of base_link and consist of nothing but a transformation - analogous to the one present in ROS2 (keep in mind the coordinate system conversion). The sensor Objects are added to the transformation Object with no transformation of their own.

Coordinate system conventions

Unity uses a left-handed coordinate system convention, while ROS2 uses a right-handed one. For this reason, remember to perform conversions to get the correct transforms (a sketch is shown below).

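For example, the commonly used position mapping between ROS2 (x forward, y left, z up) and Unity (x right, y up, z forward) can be sketched as follows (illustrative only - the project code may use its own helpers):

// Sketch: right-handed ROS2 coordinates -> left-handed Unity coordinates.\n// ROS2 (x forward, y left, z up)  ->  Unity (x right, y up, z forward)\nusing UnityEngine;\n\npublic static class RosUnityConversions\n{\n    public static Vector3 RosToUnityPosition(double x, double y, double z)\n    {\n        return new Vector3((float)-y, (float)z, (float)x);\n    }\n\n    public static Vector3 UnityToRosPosition(Vector3 p)\n    {\n        return new Vector3(p.z, -p.x, p.y); // inverse mapping\n    }\n}\n
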
"},{"location":"Components/Vehicle/URDFAndSensors/#base-link","title":"Base Link","text":"

Base Link (frame named base_link) is the formalized local coordinate system in URDF. All sensors that publish data specified in some frame present in Autoware are defined in relation to base_link. It is standard practice in ROS that base_link is the parent transformation of the whole robot and all robot parts are defined in some relation to base_link.

If a device publishes data in the base_link frame, it is added as a direct child, with no intermediate transformation Object (PoseSensor is an example). However, if this device has its own frame, it is added as a child to its frame Object - which provides an additional transformation. The final transformation can consist of many intermediate transformation Objects. The frame Objects are added to the base_link (GnssSensor and its parent gnss_link are an example).

"},{"location":"Components/Vehicle/URDFAndSensors/#sensor-kit","title":"Sensor Kit","text":"

Sensor Kit (frame named sensor_kit_base_link) is a set of objects that consists of all simulated sensors that are physically present in an autonomous vehicle and have their own coordinate system (frame). This set of sensors has its own frame, sensor_kit_base_link, defined relative to base_link.

In the Lexus RX450h 2015 Sample Sensor prefab, it is added to the base_link GameObject with an appropriately defined transformation. It acts as an intermediate frame GameObject. Sensor Kit is located on the top of the vehicle, so it is significantly shifted about the Oy and Oz axes. Sensors can be defined directly in this Object (VelodyneVLP16 is an example), or have their own transformation Object added on top of the sensor_kit_base_link (like GnssSensor mentioned in the base_link section).

The sensors described in this subsection are defined in relation to the sensor_kit_base_link frame.

"},{"location":"Components/Vehicle/URDFAndSensors/#lidars","title":"LiDARs","text":"

LidarSensor is the component that simulates the LiDAR (Light Detection and Ranging) sensor. The LiDARs mounted on the top of autonomous vehicles are primarily used to scan the environment for localization in space and for detection and identification of obstacles. LiDARs placed on the left and right sides of the vehicle are mainly used to monitor the traffic lane and detect vehicles moving in adjacent lanes. A detailed description of this sensor is available in this section.

Lexus RX450h 2015 Sample Sensor prefab has one VelodyneVLP16 prefab sensor configured on the top of the vehicle, mainly used for localization in space, but also for object recognition. Since the top LiDAR publishes data directly in the sensor_kit_base_link frame, the prefab is added directly to it - there is no transform. The two remaining LiDARs are defined, but disabled - they do not provide data about the surroundings (but you can enable them!).

"},{"location":"Components/Vehicle/URDFAndSensors/#top","title":"Top","text":""},{"location":"Components/Vehicle/URDFAndSensors/#left-disabled","title":"Left - disabled","text":""},{"location":"Components/Vehicle/URDFAndSensors/#right-disabled","title":"Right - disabled","text":""},{"location":"Components/Vehicle/URDFAndSensors/#imu","title":"IMU","text":"

IMUSensor is a component that simulates an IMU (Inertial Measurement Unit) sensor. It measures acceleration and angular velocity of the EgoVehicle. A detailed description of this sensor is available in this section.

Lexus RX450h 2015 Sample Sensor has one such sensor located on the top of the vehicle. It is added to an Object tamagawa/imu_link that matches its frame_id and contains its transform with respect to sensor_kit_base_link. This transformation has no translation, only rotation around the Oy and Oz axes. The transform is defined in such a way that its Oy axis points downwards - in accordance with the gravity vector.

"},{"location":"Components/Vehicle/URDFAndSensors/#gnss","title":"GNSS","text":"

GnssSensor is a component which simulates the position of the vehicle computed by a Global Navigation Satellite System (GNSS). A detailed description of this sensor is available in this section.

Lexus RX450h 2015 Sample Sensor prefab has one such sensor located on top of the vehicle. It is added to an Object gnss_link that matches its frame_id and contains its transform with respect to sensor_kit_base_link. The frame is slightly moved back along the Oy and Oz axes.

"},{"location":"Components/Vehicle/URDFAndSensors/#camera","title":"Camera","text":"

CameraSensor is a component that simulates an RGB camera. Autonomous vehicles can be equipped with many cameras used for various purposes. A detailed description of this sensor is available in this section.

Lexus RX450h 2015 Sample Sensor prefab has one camera, positioned on top of the vehicle in such a way that the camera's field of view provides an image including traffic lights - the status of which must be recognized by Autoware. It is added to an Object traffic_light_left_camera/camera_link that matches its frame_id and contains its transform with respect to sensor_kit_base_link.

"},{"location":"Components/Vehicle/URDFAndSensors/#pose","title":"Pose","text":"

PoseSensor is a component which provides access to the current position and rotation of the EgoVehicle - it is added as ground truth.

The position and orientation of EgoVehicle is defined as the position of the frame base_link in the global frame, so this Object is added directly as its child without a transform.

"},{"location":"Components/Vehicle/URDFAndSensors/#vehiclesensor","title":"VehicleSensor","text":"

VehicleStatusSensor is a component that is designed to aggregate information about the current state of the EgoVehicle, such as the active control mode, vehicle speed, steering of its wheels, or turn signal status. A detailed description of this sensor is available in this section.

This Object is not strictly related to any frame; however, it is treated as a sensor and is therefore added to the URDF.

"},{"location":"DeveloperGuide/Contact/","title":"Contact","text":""},{"location":"DeveloperGuide/Contact/#contact","title":"Contact","text":"

English/Japanese OK

e-mail : takatoki.makino@tier4.jp

twitter : https://twitter.com/mackierx111

"},{"location":"DeveloperGuide/Documentation/","title":"Documentation","text":""},{"location":"DeveloperGuide/Documentation/#documentation","title":"Documentation","text":"

This document uses Material for MkDocs.

"},{"location":"DeveloperGuide/Documentation/#local-hosting","title":"Local hosting","text":"

1 Install Material for MkDocs.

$ pip install mkdocs-material\n
2 Hosting on localhost.
$ cd AWSIM\n$ mkdocs serve\nINFO     -  Building documentation...\nINFO     -  Cleaning site directory\nINFO     -  Documentation built in 0.16 seconds\nINFO     -  [03:13:22] Watching paths for changes: 'docs', 'mkdocs.yml'\nINFO     -  [03:13:22] Serving on http://127.0.0.1:8000/\n

3 Access http://127.0.0.1:8000/ with a web browser.

For further reference see Material for MkDocs - Getting started.

"},{"location":"DeveloperGuide/Documentation/#mkdocs-files","title":"MkDocs files","text":"

Use the following /docs directory and mkdocs.yml for new documentation files.

AWSIM\n\u251c\u2500 docs/                // markdown and image file for each document.\n\u2514\u2500 mkdocs.yml           // mkdocs config.\n
Create one directory per document. For example, the directory structure of this \"Documentation\" page might look like this.
AWSIM\n\u2514\u2500 docs/                            // Root of all documents\n    \u2514\u2500 DeveloperGuide               // Category\n        \u2514\u2500 Documentation            // Root of each document\n            \u251c\u2500 index.md             // Markdown file\n            \u2514\u2500 image_0.png          // Images used in markdown file\n
"},{"location":"DeveloperGuide/Documentation/#deploy-hosting","title":"Deploy & Hosting","text":"

When docs are pushed to the main branch, they are deployed to GitHub Pages using GitHub Actions. See also Material for MkDocs - Publishing your site

"},{"location":"DeveloperGuide/EditorSetup/Graphy/","title":"Graphy Asset Setup","text":""},{"location":"DeveloperGuide/EditorSetup/Graphy/#graphy-asset-setup","title":"Graphy Asset Setup","text":""},{"location":"DeveloperGuide/EditorSetup/Graphy/#add-graphy-from-asset-store","title":"Add Graphy from Asset Store","text":"

1) Go to the Unity Asset Store and add Graphy to your personal assets.

Graphy Asset Store link:

"},{"location":"DeveloperGuide/EditorSetup/Graphy/#add-graphy-to-the-unity-editor","title":"Add Graphy to the Unity Editor","text":"

1) Open up the Unity Editor. - Open up a temporary new scene by File -> New Scene -> Empty (Built-in) -> Create - This is due to a bug with Unity crashing on certain Linux configurations. - Once the package is imported, you can open up the desired scene. 2) Go to the Window menu and select Package Manager. 3) Make sure the My Assets tab is selected from the top left of the Package Manager window. 4) Find & select Graphy from the list and click Download or Import from the bottom left of the Package Manager window. 5) There will be a popup window showing the contents of the package. Click Import to add Graphy to the project.

After the import is complete, you should be able to see the Graphy prefab in the Hierarchy window of the AutowareSimulation scene. If it is missing, you can add it to the scene by following the steps below.

"},{"location":"DeveloperGuide/EditorSetup/Graphy/#integrating-graphy-into-custom-scenes","title":"Integrating Graphy into custom scenes","text":"

Graphy is pre-integrated within the AutowareSimulation scene. To incorporate Graphy into your own custom scenes, please adhere to the following steps:

1) Go to the Assets folder in the Project window. 2) Open Graphy > Prefab folder. 3) Drag the Graphy prefab into the scene. 4) You can customize your Graphy by selecting the Graphy prefab in the scene and changing the settings in the inspector window.

"},{"location":"DeveloperGuide/EditorSetup/Graphy/#useful-links","title":"Useful links:","text":"

Unity Package manager:

Graphy Github page:

Graphy Documentation:

"},{"location":"DeveloperGuide/EditorSetup/JetBrainsRider/","title":"Rider Configuration","text":""},{"location":"DeveloperGuide/EditorSetup/JetBrainsRider/#jetbrains-rider-setup-with-unity","title":"JetBrains Rider setup with Unity","text":""},{"location":"DeveloperGuide/EditorSetup/JetBrainsRider/#install-jetbrains-rider","title":"Install JetBrains Rider:","text":"

Follow the steps in:

sudo snap install rider --classic\n
"},{"location":"DeveloperGuide/EditorSetup/JetBrainsRider/#install-net-sdk","title":"Install .NET SDK:","text":"

Follow the steps in:

# Get Ubuntu version\ndeclare repo_version=$(if command -v lsb_release &> /dev/null; then lsb_release -r -s; else grep -oP '(?<=^VERSION_ID=).+' /etc/os-release | tr -d '\"'; fi)\n\n# Download Microsoft signing key and repository\nwget https://packages.microsoft.com/config/ubuntu/$repo_version/packages-microsoft-prod.deb -O packages-microsoft-prod.deb\n\n# Install Microsoft signing key and repository\nsudo dpkg -i packages-microsoft-prod.deb\n\n# Clean up\nrm packages-microsoft-prod.deb\n\n# Update packages\nsudo apt update\nsudo apt install dotnet-sdk-8.0\n
"},{"location":"DeveloperGuide/EditorSetup/JetBrainsRider/#connect-rider-to-unity-editor","title":"Connect Rider to Unity Editor:","text":"

Follow the steps in:

1) Open an existing Unity project in the Unity Editor.\n\n2) Select Edit > Preferences (Unity > Settings on macOS) and open the External Tools page.\n\n3) In the External Script Editor, select a \"Rider\" installation.\n\n4) In the Preferences window, click \"Regenerate project files\" under the External Tools section.\n\n5) While still in the Unity Editor, right-click anywhere in the Project view and select Open C# Project.\n\n6) Rider will start automatically and open the solution related to this Unity project. Once the solution is loaded, Rider and the Unity Editor become connected. The Unity icon on the toolbar shows the current connection status:\n
"},{"location":"DeveloperGuide/EditorSetup/VSCode/","title":"VSCode Configuration","text":""},{"location":"DeveloperGuide/EditorSetup/VSCode/#visual-studio-code-setup-with-unity","title":"Visual Studio Code setup with Unity","text":""},{"location":"DeveloperGuide/EditorSetup/VSCode/#install-visual-studio-code","title":"Install Visual Studio Code","text":"

Follow the steps in: - https://code.visualstudio.com/docs/setup/linux

# Install the keys and repository\nsudo apt-get install wget gpg\nwget -qO- https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > packages.microsoft.gpg\nsudo install -D -o root -g root -m 644 packages.microsoft.gpg /etc/apt/keyrings/packages.microsoft.gpg\nsudo sh -c 'echo \"deb [arch=amd64,arm64,armhf signed-by=/etc/apt/keyrings/packages.microsoft.gpg] https://packages.microsoft.com/repos/code stable main\" > /etc/apt/sources.list.d/vscode.list'\nrm -f packages.microsoft.gpg\n\n# Then update the package cache and install the package using:\nsudo apt install apt-transport-https\nsudo apt update\nsudo apt install code\n
"},{"location":"DeveloperGuide/EditorSetup/VSCode/#install-the-dotnet-sdk","title":"Install the Dotnet SDK","text":"

Follow the steps in: - https://learn.microsoft.com/en-us/dotnet/core/install/linux-ubuntu#register-the-microsoft-package-repository

# Get Ubuntu version\ndeclare repo_version=$(if command -v lsb_release &> /dev/null; then lsb_release -r -s; else grep -oP '(?<=^VERSION_ID=).+' /etc/os-release | tr -d '\"'; fi)\n\n# Download Microsoft signing key and repository\nwget https://packages.microsoft.com/config/ubuntu/$repo_version/packages-microsoft-prod.deb -O packages-microsoft-prod.deb\n\n# Install Microsoft signing key and repository\nsudo dpkg -i packages-microsoft-prod.deb\n\n# Clean up\nrm packages-microsoft-prod.deb\n\n# Update packages\nsudo apt update\n\nsudo apt install dotnet-sdk-8.0\n
"},{"location":"DeveloperGuide/EditorSetup/VSCode/#install-the-extensions","title":"Install the extensions","text":"

Follow the steps in: - https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csdevkit - https://marketplace.visualstudio.com/items?itemName=VisualStudioToolsForUnity.vstuc - https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp

Launch VS Code Quick Open (Ctrl+P), paste the following command, and press enter. - ext install ms-dotnettools.csdevkit Repeat for: - ext install VisualStudioToolsForUnity.vstuc - ext install ms-dotnettools.csharp

"},{"location":"DeveloperGuide/EditorSetup/VSCode/#configure-the-unity","title":"Configure the Unity","text":"

It should all be configured now. You can open a script by double-clicking it in the Project window in Unity, or open the whole project in VS Code: - Assets -> Open C# Project

Syntax highlighting and CTRL-click navigation should work out of the box.

For more advanced features such as debugging, check the Unity Development with VS Code Documentation.

"},{"location":"DeveloperGuide/EditorSetup/VSCode/#additional-notes","title":"Additional notes","text":"

In the AWSIM project, the package Visual Studio Editor is already installed to satisfy the requirement from the Unity for Visual Studio Code extension.

"},{"location":"DeveloperGuide/HowToContribute/","title":"How to Contribute","text":""},{"location":"DeveloperGuide/HowToContribute/#how-to-contribute","title":"How to Contribute","text":"

Everyone is welcome!

"},{"location":"DeveloperGuide/HowToContribute/#how-can-i-get-help","title":"How can I get help?","text":"

Do not open issues for general support questions as we want to keep GitHub issues for confirmed bug reports. Instead, open a discussion in the Q&A category. The troubleshooting pages at AWSIM and at Autoware will also be helpful.

"},{"location":"DeveloperGuide/HowToContribute/#issue","title":"Issue","text":"

Before you post an issue, please search Issues and the Discussions Q&A category to check whether it is a known issue.

This page explains how to create an issue in a repository.

"},{"location":"DeveloperGuide/HowToContribute/#bug-report","title":"Bug report","text":"

If you find a new bug, please create an issue here.

"},{"location":"DeveloperGuide/HowToContribute/#feature-request","title":"Feature request","text":"

If you propose a new feature or have an idea, please create an issue here.

"},{"location":"DeveloperGuide/HowToContribute/#task","title":"Task","text":"

If you plan to contribute to AWSIM Labs, please create an issue here.

"},{"location":"DeveloperGuide/HowToContribute/#question","title":"Question","text":""},{"location":"DeveloperGuide/HowToContribute/#pull-requests","title":"Pull requests","text":"

If you have an idea to improve the simulation, you can submit a pull request. The following process should be followed:

  1. Create a derived branch (feature/***) from the main branch.
  2. Create a pull request to the main branch.

Please keep the following in mind while developing new features:

"},{"location":"DeveloperGuide/License/","title":"License","text":""},{"location":"DeveloperGuide/License/#awsim-licenses","title":"AWSIM Licenses","text":"

AWSIM License applies to tier4/AWSIM repositories and all content contained in the Releases.

"},{"location":"DeveloperGuide/License/#apache20-license","title":"Apache2.0 License","text":"
**********************************************************************************\n\n                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. 
For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. 
Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. (Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright 2022 TIER IV, Inc.\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n
"},{"location":"DeveloperGuide/License/#cc-by-nc-license","title":"CC BY-NC License","text":"
**********************************************************************************\n\nAttribution-NonCommercial 4.0 International\n\n=======================================================================\n\nCreative Commons Corporation (\"Creative Commons\") is not a law firm and\ndoes not provide legal services or legal advice. Distribution of\nCreative Commons public licenses does not create a lawyer-client or\nother relationship. Creative Commons makes its licenses and related\ninformation available on an \"as-is\" basis. Creative Commons gives no\nwarranties regarding its licenses, any material licensed under their\nterms and conditions, or any related information. Creative Commons\ndisclaims all liability for damages resulting from their use to the\nfullest extent possible.\n\nUsing Creative Commons Public Licenses\n\nCreative Commons public licenses provide a standard set of terms and\nconditions that creators and other rights holders may use to share\noriginal works of authorship and other material subject to copyright\nand certain other rights specified in the public license below. The\nfollowing considerations are for informational purposes only, are not\nexhaustive, and do not form part of our licenses.\n\n     Considerations for licensors: Our public licenses are\n     intended for use by those authorized to give the public\n     permission to use material in ways otherwise restricted by\n     copyright and certain other rights. Our licenses are\n     irrevocable. Licensors should read and understand the terms\n     and conditions of the license they choose before applying it.\n     Licensors should also secure all rights necessary before\n     applying our licenses so that the public can reuse the\n     material as expected. Licensors should clearly mark any\n     material not subject to the license. This includes other CC-\n     licensed material, or material used under an exception or\n     limitation to copyright. More considerations for licensors:\n    wiki.creativecommons.org/Considerations_for_licensors\n\n     Considerations for the public: By using one of our public\n     licenses, a licensor grants the public permission to use the\n     licensed material under specified terms and conditions. If\n     the licensor's permission is not necessary for any reason--for\n     example, because of any applicable exception or limitation to\n     copyright--then that use is not regulated by the license. Our\n     licenses grant only permissions under copyright and certain\n     other rights that a licensor has authority to grant. Use of\n     the licensed material may still be restricted for other\n     reasons, including because others have copyright or other\n     rights in the material. A licensor may make special requests,\n     such as asking that all changes be marked or described.\n     Although not required by our licenses, you are encouraged to\n     respect those requests where reasonable. More considerations\n     for the public:\n    wiki.creativecommons.org/Considerations_for_licensees\n\n=======================================================================\n\nCreative Commons Attribution-NonCommercial 4.0 International Public\nLicense\n\nBy exercising the Licensed Rights (defined below), You accept and agree\nto be bound by the terms and conditions of this Creative Commons\nAttribution-NonCommercial 4.0 International Public License (\"Public\nLicense\"). 
To the extent this Public License may be interpreted as a\ncontract, You are granted the Licensed Rights in consideration of Your\nacceptance of these terms and conditions, and the Licensor grants You\nsuch rights in consideration of benefits the Licensor receives from\nmaking the Licensed Material available under these terms and\nconditions.\n\n\nSection 1 -- Definitions.\n\n  a. Adapted Material means material subject to Copyright and Similar\n     Rights that is derived from or based upon the Licensed Material\n     and in which the Licensed Material is translated, altered,\n     arranged, transformed, or otherwise modified in a manner requiring\n     permission under the Copyright and Similar Rights held by the\n     Licensor. For purposes of this Public License, where the Licensed\n     Material is a musical work, performance, or sound recording,\n     Adapted Material is always produced where the Licensed Material is\n     synched in timed relation with a moving image.\n\n  b. Adapter's License means the license You apply to Your Copyright\n     and Similar Rights in Your contributions to Adapted Material in\n     accordance with the terms and conditions of this Public License.\n\n  c. Copyright and Similar Rights means copyright and/or similar rights\n     closely related to copyright including, without limitation,\n     performance, broadcast, sound recording, and Sui Generis Database\n     Rights, without regard to how the rights are labeled or\n     categorized. For purposes of this Public License, the rights\n     specified in Section 2(b)(1)-(2) are not Copyright and Similar\n     Rights.\n  d. Effective Technological Measures means those measures that, in the\n     absence of proper authority, may not be circumvented under laws\n     fulfilling obligations under Article 11 of the WIPO Copyright\n     Treaty adopted on December 20, 1996, and/or similar international\n     agreements.\n\n  e. Exceptions and Limitations means fair use, fair dealing, and/or\n     any other exception or limitation to Copyright and Similar Rights\n     that applies to Your use of the Licensed Material.\n\n  f. Licensed Material means the artistic or literary work, database,\n     or other material to which the Licensor applied this Public\n     License.\n\n  g. Licensed Rights means the rights granted to You subject to the\n     terms and conditions of this Public License, which are limited to\n     all Copyright and Similar Rights that apply to Your use of the\n     Licensed Material and that the Licensor has authority to license.\n\n  h. Licensor means the individual(s) or entity(ies) granting rights\n     under this Public License.\n\n  i. NonCommercial means not primarily intended for or directed towards\n     commercial advantage or monetary compensation. For purposes of\n     this Public License, the exchange of the Licensed Material for\n     other material subject to Copyright and Similar Rights by digital\n     file-sharing or similar means is NonCommercial provided there is\n     no payment of monetary compensation in connection with the\n     exchange.\n\n  j. 
Share means to provide material to the public by any means or\n     process that requires permission under the Licensed Rights, such\n     as reproduction, public display, public performance, distribution,\n     dissemination, communication, or importation, and to make material\n     available to the public including in ways that members of the\n     public may access the material from a place and at a time\n     individually chosen by them.\n\n  k. Sui Generis Database Rights means rights other than copyright\n     resulting from Directive 96/9/EC of the European Parliament and of\n     the Council of 11 March 1996 on the legal protection of databases,\n     as amended and/or succeeded, as well as other essentially\n     equivalent rights anywhere in the world.\n\n  l. You means the individual or entity exercising the Licensed Rights\n     under this Public License. Your has a corresponding meaning.\n\n\nSection 2 -- Scope.\n\n  a. License grant.\n\n       1. Subject to the terms and conditions of this Public License,\n          the Licensor hereby grants You a worldwide, royalty-free,\n          non-sublicensable, non-exclusive, irrevocable license to\n          exercise the Licensed Rights in the Licensed Material to:\n\n            a. reproduce and Share the Licensed Material, in whole or\n               in part, for NonCommercial purposes only; and\n\n            b. produce, reproduce, and Share Adapted Material for\n               NonCommercial purposes only.\n\n       2. Exceptions and Limitations. For the avoidance of doubt, where\n          Exceptions and Limitations apply to Your use, this Public\n          License does not apply, and You do not need to comply with\n          its terms and conditions.\n\n       3. Term. The term of this Public License is specified in Section\n          6(a).\n\n       4. Media and formats; technical modifications allowed. The\n          Licensor authorizes You to exercise the Licensed Rights in\n          all media and formats whether now known or hereafter created,\n          and to make technical modifications necessary to do so. The\n          Licensor waives and/or agrees not to assert any right or\n          authority to forbid You from making technical modifications\n          necessary to exercise the Licensed Rights, including\n          technical modifications necessary to circumvent Effective\n          Technological Measures. For purposes of this Public License,\n          simply making modifications authorized by this Section 2(a)\n          (4) never produces Adapted Material.\n\n       5. Downstream recipients.\n\n            a. Offer from the Licensor -- Licensed Material. Every\n               recipient of the Licensed Material automatically\n               receives an offer from the Licensor to exercise the\n               Licensed Rights under the terms and conditions of this\n               Public License.\n\n            b. No downstream restrictions. You may not offer or impose\n               any additional or different terms or conditions on, or\n               apply any Effective Technological Measures to, the\n               Licensed Material if doing so restricts exercise of the\n               Licensed Rights by any recipient of the Licensed\n               Material.\n\n       6. No endorsement. 
Nothing in this Public License constitutes or\n          may be construed as permission to assert or imply that You\n          are, or that Your use of the Licensed Material is, connected\n          with, or sponsored, endorsed, or granted official status by,\n          the Licensor or others designated to receive attribution as\n          provided in Section 3(a)(1)(A)(i).\n\n  b. Other rights.\n\n       1. Moral rights, such as the right of integrity, are not\n          licensed under this Public License, nor are publicity,\n          privacy, and/or other similar personality rights; however, to\n          the extent possible, the Licensor waives and/or agrees not to\n          assert any such rights held by the Licensor to the limited\n          extent necessary to allow You to exercise the Licensed\n          Rights, but not otherwise.\n\n       2. Patent and trademark rights are not licensed under this\n          Public License.\n\n       3. To the extent possible, the Licensor waives any right to\n          collect royalties from You for the exercise of the Licensed\n          Rights, whether directly or through a collecting society\n          under any voluntary or waivable statutory or compulsory\n          licensing scheme. In all other cases the Licensor expressly\n          reserves any right to collect such royalties, including when\n          the Licensed Material is used other than for NonCommercial\n          purposes.\n\n\nSection 3 -- License Conditions.\n\nYour exercise of the Licensed Rights is expressly made subject to the\nfollowing conditions.\n\n  a. Attribution.\n\n       1. If You Share the Licensed Material (including in modified\n          form), You must:\n\n            a. retain the following if it is supplied by the Licensor\n               with the Licensed Material:\n\n                 i. identification of the creator(s) of the Licensed\n                    Material and any others designated to receive\n                    attribution, in any reasonable manner requested by\n                    the Licensor (including by pseudonym if\n                    designated);\n\n                ii. a copyright notice;\n\n               iii. a notice that refers to this Public License;\n\n                iv. a notice that refers to the disclaimer of\n                    warranties;\n\n                 v. a URI or hyperlink to the Licensed Material to the\n                    extent reasonably practicable;\n\n            b. indicate if You modified the Licensed Material and\n               retain an indication of any previous modifications; and\n\n            c. indicate the Licensed Material is licensed under this\n               Public License, and include the text of, or the URI or\n               hyperlink to, this Public License.\n\n       2. You may satisfy the conditions in Section 3(a)(1) in any\n          reasonable manner based on the medium, means, and context in\n          which You Share the Licensed Material. For example, it may be\n          reasonable to satisfy the conditions by providing a URI or\n          hyperlink to a resource that includes the required\n          information.\n\n       3. If requested by the Licensor, You must remove any of the\n          information required by Section 3(a)(1)(A) to the extent\n          reasonably practicable.\n\n       4. 
If You Share Adapted Material You produce, the Adapter's\n          License You apply must not prevent recipients of the Adapted\n          Material from complying with this Public License.\n\n\nSection 4 -- Sui Generis Database Rights.\n\nWhere the Licensed Rights include Sui Generis Database Rights that\napply to Your use of the Licensed Material:\n\n  a. for the avoidance of doubt, Section 2(a)(1) grants You the right\n     to extract, reuse, reproduce, and Share all or a substantial\n     portion of the contents of the database for NonCommercial purposes\n     only;\n\n  b. if You include all or a substantial portion of the database\n     contents in a database in which You have Sui Generis Database\n     Rights, then the database in which You have Sui Generis Database\n     Rights (but not its individual contents) is Adapted Material; and\n\n  c. You must comply with the conditions in Section 3(a) if You Share\n     all or a substantial portion of the contents of the database.\n\nFor the avoidance of doubt, this Section 4 supplements and does not\nreplace Your obligations under this Public License where the Licensed\nRights include other Copyright and Similar Rights.\n\n\nSection 5 -- Disclaimer of Warranties and Limitation of Liability.\n\n  a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE\n     EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS\n     AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF\n     ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,\n     IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,\n     WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR\n     PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,\n     ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT\n     KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT\n     ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.\n\n  b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE\n     TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,\n     NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,\n     INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,\n     COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR\n     USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN\n     ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR\n     DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR\n     IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.\n\n  c. The disclaimer of warranties and limitation of liability provided\n     above shall be interpreted in a manner that, to the extent\n     possible, most closely approximates an absolute disclaimer and\n     waiver of all liability.\n\n\nSection 6 -- Term and Termination.\n\n  a. This Public License applies for the term of the Copyright and\n     Similar Rights licensed here. However, if You fail to comply with\n     this Public License, then Your rights under this Public License\n     terminate automatically.\n\n  b. Where Your right to use the Licensed Material has terminated under\n     Section 6(a), it reinstates:\n\n       1. automatically as of the date the violation is cured, provided\n          it is cured within 30 days of Your discovery of the\n          violation; or\n\n       2. 
upon express reinstatement by the Licensor.\n\n     For the avoidance of doubt, this Section 6(b) does not affect any\n     right the Licensor may have to seek remedies for Your violations\n     of this Public License.\n\n  c. For the avoidance of doubt, the Licensor may also offer the\n     Licensed Material under separate terms or conditions or stop\n     distributing the Licensed Material at any time; however, doing so\n     will not terminate this Public License.\n\n  d. Sections 1, 5, 6, 7, and 8 survive termination of this Public\n     License.\n\n\nSection 7 -- Other Terms and Conditions.\n\n  a. The Licensor shall not be bound by any additional or different\n     terms or conditions communicated by You unless expressly agreed.\n\n  b. Any arrangements, understandings, or agreements regarding the\n     Licensed Material not stated herein are separate from and\n     independent of the terms and conditions of this Public License.\n\n\nSection 8 -- Interpretation.\n\n  a. For the avoidance of doubt, this Public License does not, and\n     shall not be interpreted to, reduce, limit, restrict, or impose\n     conditions on any use of the Licensed Material that could lawfully\n     be made without permission under this Public License.\n\n  b. To the extent possible, if any provision of this Public License is\n     deemed unenforceable, it shall be automatically reformed to the\n     minimum extent necessary to make it enforceable. If the provision\n     cannot be reformed, it shall be severed from this Public License\n     without affecting the enforceability of the remaining terms and\n     conditions.\n\n  c. No term or condition of this Public License will be waived and no\n     failure to comply consented to unless expressly agreed to by the\n     Licensor.\n\n  d. Nothing in this Public License constitutes or may be interpreted\n     as a limitation upon, or waiver of, any privileges and immunities\n     that apply to the Licensor or You, including from the legal\n     processes of any jurisdiction or authority.\n\n=======================================================================\n\nCreative Commons is not a party to its public\nlicenses. Notwithstanding, Creative Commons may elect to apply one of\nits public licenses to material it publishes and in those instances\nwill be considered the \u201cLicensor.\u201d The text of the Creative Commons\npublic licenses is dedicated to the public domain under the CC0 Public\nDomain Dedication. Except for the limited purpose of indicating that\nmaterial is shared under a Creative Commons public license or as\notherwise permitted by the Creative Commons policies published at\ncreativecommons.org/policies, Creative Commons does not authorize the\nuse of the trademark \"Creative Commons\" or any other trademark or logo\nof Creative Commons without its prior written consent including,\nwithout limitation, in connection with any unauthorized modifications\nto any of its public licenses or any other arrangements,\nunderstandings, or agreements concerning use of licensed material. For\nthe avoidance of doubt, this paragraph does not form part of the\npublic licenses.\n\nCreative Commons may be contacted at creativecommons.org\n
"},{"location":"DeveloperGuide/TroubleShooting/","title":"Trouble shooting","text":""},{"location":"DeveloperGuide/TroubleShooting/#trouble-shooting","title":"Trouble shooting","text":"

This document describes the most common errors encountered when working with AWSIM or Autoware.

Trouble Solution Massive output of Plugins errors git clone the AWSIM repository again error : RuntimeError: error not set, at C:\\ci\\ws\\src\\ros2\\rcl\\rcl\\src\\rcl\\node.c:262 Set up environment variables and config around ROS2 correctly. For example: - Environment variables - cyclonedds_config.xml $ ros2 topic list is not displayed - your machine ROS_DOMAIN_ID is different- ROS2 is not sourced Using AWSIM on Windows and Autoware on Ubuntu. $ ros2 topic list is not displayed. Allow the communication in Windows Firewall Self-driving stops in the middle of the road. Check if your map data is correct (PointCloud, VectorMap, 3D fbx models) Connecting AWSIM and Autoware results in bad network Make ROS localhost-only. Include the following in the .bashrc (The password will be requested at terminal startup after OS startup.) export ROS_LOCALHOST_ONLY=1export RMW_IMPLEMENTATION=rmw_cyclonedds_cppif [ ! -e /tmp/cycloneDDS_configured ]; thensudo sysctl -w net.core.rmem_max=2147483647sudo ip link set lo multicast ontouch /tmp/cycloneDDS_configuredfi Lidar (colored pointcloud on RViz) does not show. Reduce processing load with the following commands. This can only be applied to Autoware's awsim-stable branch. cd <path_to_your_autoware_folder>wget \"https://drive.google.com/uc?export=download&id=11mkwfg-OaXIp3Z5c3R58Pob3butKwE1Z\" -O patch.shbash patch.sh && rm patch.sh Error when starting AWSIM binary: segmentation fault (core dumped) - Check if your Nvidia drivers or Vulkan API are installed correctly - When building the binary, please pay attention to whether the Graphics Jobs option in Player Settings is disabled. It should be disabled since it may produce segmentation fault errors. Please check the forum for more details. Initial pose does not match automatically. Set the initial pose manually. Unity crashes. Check the log for the cause of the error. Editor log file location Windows : C:\\Users\\username\\AppData\\Local\\Unity\\Editor\\Editor.log Linux : ~/.config/unity3d/Editor.log Player log file location Windows : C:\\Users\\username\\AppData\\LocalLow\\CompanyName\\ProductName\\output_log.txt Linux : ~/.config/unity3d/CompanyName/ProductName/Player.log See also : Unity Documentation - Log Files Safe mode dialog appears when starting UnityEditor, or error : No usable version of libssl was found 1. download libssl $ wget http://security.ubuntu.com/ubuntu/pool/main/o/openssl1.0/libssl1.0.0_1.0.2n-1ubuntu5.11_amd64.deb 2. install sudo dpkg -i libssl1.0.0_1.0.2n-1ubuntu5.11_amd64.deb (Windows) Unity Editor's error: Plugins: Failed to load 'Assets/RGLUnityPlugin/Plugins/Windows/x86_64/RobotecGPULidar.dll' because one or more of its dependencies could not be loaded. Install Microsoft Visual C++ Redistributable packages for Visual Studio 2015, 2017, 2019, and 2022 (X64 Architecture) (Windows) Built binary or Unity Editor freezes when the simulation is started Update/install the latest NIC (Network Interface Card) drivers for your PC. Especially, if you can find the latest drivers provided by chip vendors for the interfaces (not by Microsoft), we recommend the vendors' drivers."},{"location":"GettingStarted/QuickStartDemo/","title":"Quick Start Demo","text":"

Below you can find instructions on how to set up the self-driving demo of the AWSIM simulation controlled by Autoware. The instructions assume the Ubuntu OS is used.

"},{"location":"GettingStarted/QuickStartDemo/#demo-configuration","title":"Demo configuration","text":"

The simulation provided in the AWSIM demo is configured as follows:

AWSIM Demo Settings Vehicle Lexus RX 450h Environment Japan Tokyo Nishishinjuku Sensors GNSS, IMU, 3 x VLP16, Traffic Light Camera Traffic Randomized traffic ROS2 humble"},{"location":"GettingStarted/QuickStartDemo/#pc-specs","title":"PC specs","text":"

Please make sure that your machine meets the following requirements in order to run the simulation correctly:

Required PC Specs OS Ubuntu 22.04 CPU 6 cores and 12 threads or higher GPU RTX 2080Ti or higher Nvidia Driver (Ubuntu 22) >=545"},{"location":"GettingStarted/QuickStartDemo/#dds-configuration","title":"DDS configuration","text":"

In order to run AWSIM Labs with the best performance and without hogging the network, please follow the steps below.

Add the following lines to the ~/.bashrc file:

if [ ! -e /tmp/cycloneDDS_configured ]; then\n    sudo sysctl -w net.core.rmem_max=2147483647\n    sudo sysctl -w net.ipv4.ipfrag_time=3\n    sudo sysctl -w net.ipv4.ipfrag_high_thresh=134217728     # (128 MB)\n    sudo ip link set lo multicast on\n    touch /tmp/cycloneDDS_configured\nfi\n

Every time you restart this machine and open a new terminal, the above commands will be executed (the sudo password will be requested at the first terminal startup after boot).

They will not be executed again until the machine is restarted.
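
As a quick sanity check (optional, using standard Linux tools; the expected values follow from the snippet above), you can verify that the settings were applied:

sysctl net.core.rmem_max               # should print: net.core.rmem_max = 2147483647\nip link show lo | grep -o MULTICAST    # should print: MULTICAST\n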

"},{"location":"GettingStarted/QuickStartDemo/#cyclonedds-configuration","title":"CycloneDDS configuration","text":"

Save the following as cyclonedds.xml in your home directory ~:

<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n<CycloneDDS xmlns=\"https://cdds.io/config\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"https://cdds.io/config https://raw.githubusercontent.com/eclipse-cyclonedds/cyclonedds/master/etc/cyclonedds.xsd\">\n    <Domain Id=\"any\">\n        <General>\n            <Interfaces>\n                <NetworkInterface name=\"lo\" priority=\"default\" multicast=\"default\" />\n            </Interfaces>\n            <AllowMulticast>default</AllowMulticast>\n            <MaxMessageSize>65500B</MaxMessageSize>\n        </General>\n        <Internal>\n            <SocketReceiveBufferSize min=\"10MB\"/>\n            <Watermarks>\n                <WhcHigh>500kB</WhcHigh>\n            </Watermarks>\n        </Internal>\n    </Domain>\n</CycloneDDS>\n

Make sure the following lines are added to the ~/.bashrc file:

export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp\nexport CYCLONEDDS_URI=/home/your_username/cyclonedds.xml\n

Replace your_username with your actual username.
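
Alternatively, since CycloneDDS accepts file:// URIs and the shell expands $HOME to an absolute path (consistent with the note below), the following sketch avoids hard-coding the username:

export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp\nexport CYCLONEDDS_URI=file://$HOME/cyclonedds.xml\n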

Note

You should use the absolute path to the cyclonedds.xml file.

Warning

A system restart is required for these changes to work.

Warning

DO NOT set export ROS_LOCALHOST_ONLY=1. CycloneDDS configuration will be enough.
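
To double-check that this variable is not set anywhere (a quick, optional check with standard tools):

printenv ROS_LOCALHOST_ONLY                        # should print nothing\ngrep -n ROS_LOCALHOST_ONLY ~/.bashrc ~/.profile    # should find no active export\n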

"},{"location":"GettingStarted/QuickStartDemo/#start-the-demo","title":"Start the demo","text":""},{"location":"GettingStarted/QuickStartDemo/#running-the-awsim-demo","title":"Running the AWSIM demo","text":"

To run the simulator, please follow the steps below.

  1. Install the Nvidia GPU driver (skip if already installed). 1. Add the Nvidia driver PPA to the apt repository

    sudo add-apt-repository ppa:graphics-drivers/ppa\nsudo apt update\n
    2. Install the recommended version of the driver.
    sudo ubuntu-drivers autoinstall\n\n# or install a specific version (following was tested)\nsudo apt install nvidia-driver-550\n
    3. Reboot your machine so that the installed driver is detected by the system.
    sudo reboot\n
    4. Open a terminal and check that the nvidia-smi command is available and outputs a summary similar to the one presented below.
    $ nvidia-smi\n+-----------------------------------------------------------------------------------------+\n| NVIDIA-SMI 550.54.15              Driver Version: 550.54.15      CUDA Version: 12.4     |\n|-----------------------------------------+------------------------+----------------------+\n| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |\n| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |\n|                                         |                        |               MIG M. |\n|=========================================+========================+======================|\n|   0  NVIDIA GeForce RTX 3080        Off |   00000000:2D:00.0  On |                  N/A |\n| 30%   40C    P8             35W /  320W |    5299MiB /  10240MiB |      7%      Default |\n|                                         |                        |                  N/A |\n+-----------------------------------------+------------------------+----------------------+\n...\n
  2. Install Vulkan Graphics Library (Skip if already installed). 1. Update the environment.

    sudo apt update\n
    2. Install the library.
    sudo apt install libvulkan1\n
  3. Download and Run AWSIM Demo binary.

    1. Download the latest release from the AWSIM Labs GitHub Release Page.

    2. Unzip the downloaded file.

    3. Make the file executable.

      Right-click the `awsim_labs.x86_64` file and check the `Execute` checkbox\n\n  ![](Image_1.png)\n\n  or execute the command below.\n\n  ```\n  chmod +x <path to AWSIM folder>/awsim_labs.x86_64\n  ```\n
    4. Launch awsim_labs.x86_64.
      ./<path to AWSIM folder>/awsim_labs.x86_64\n

      It may take some time for the application to start, so please wait until an image similar to the one presented below is visible in your application window.
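
As an optional check (assuming ROS 2 Humble, as used elsewhere in this guide), you can confirm from a separate terminal that the simulation is publishing; source ROS 2 only in that separate terminal, never in the one running AWSIM:

source /opt/ros/humble/setup.bash\nros2 topic list                # should include /clock among the simulation topics\nros2 topic echo /clock --once\n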

"},{"location":"GettingStarted/QuickStartDemo/#launching-autoware","title":"Launching Autoware","text":"

In order to configure and run the Autoware software with the AWSIM demo, please:

  1. Download map files (pcd, osm) and unzip them.

    Download Map files (pcd, osm)

  2. Clone Autoware and move to the directory.

    git clone https://github.com/autowarefoundation/autoware.git\ncd autoware\n
  3. Switch branch to main.
    git checkout main\n
  4. Configure the environment. (Skip if Autoware environment has been configured before)
    ./setup-dev-env.sh\n
  5. Create the src directory and clone external dependent repositories into it.
    mkdir src\nvcs import src < autoware.repos\n
  6. Install dependent ROS packages.
    source /opt/ros/humble/setup.bash\nrosdep update\nrosdep install -y --from-paths src --ignore-src --rosdistro $ROS_DISTRO\n
  7. Build the workspace.
    colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=RelWithDebInfo -DCMAKE_EXPORT_COMPILE_COMMANDS=1\n
  8. Launch Autoware.
    source install/setup.bash\nros2 launch autoware_launch e2e_simulator.launch.xml vehicle_model:=sample_vehicle sensor_model:=awsim_labs_sensor_kit map_path:=<absolute path of map folder>\n\n# Use the absolute path for the map folder, don't use the ~ operator.\n\n# Example:\nros2 launch autoware_launch e2e_simulator.launch.xml vehicle_model:=sample_vehicle sensor_model:=awsim_labs_sensor_kit map_path:=/home/your_username/autoware_map/nishishinjuku_autoware_map\n
"},{"location":"GettingStarted/QuickStartDemo/#lets-run-the-self-driving-simulation","title":"Let's run the self-Driving simulation","text":"
  1. Launch AWSIM and Autoware according to the steps described earlier in this document.

  2. Autoware will automatically set its pose estimation as presented below.

  3. Set the navigation goal for the vehicle.

  4. Optionally, you can define an intermediate point through which the vehicle will travel on its way to the destination. The generated path can be seen in the image below.

  5. Enable self-driving.

To make the vehicle start navigating, engage its operation using the command below.

cd autoware\nsource install/setup.bash\nros2 topic pub /autoware/engage autoware_auto_vehicle_msgs/msg/Engage '{engage: True}' -1\n

The self-driving simulation demo has been successfully launched!

"},{"location":"GettingStarted/QuickStartDemo/#troubleshooting","title":"Troubleshooting","text":"

In case of any problems with running the sample AWSIM binary with Autoware, start by checking our Troubleshooting page, which covers the most common problems.

"},{"location":"GettingStarted/QuickStartDemo/#appendix","title":"Appendix","text":""},{"location":"GettingStarted/SetupUnityProject/","title":"Setup Unity Project","text":""},{"location":"GettingStarted/SetupUnityProject/#setup-unity-project","title":"Setup Unity Project","text":"

Info

It is advised to check out the Quick Start Demo tutorial before reading this section.

This page is a tutorial for setting up an AWSIM Unity project.

"},{"location":"GettingStarted/SetupUnityProject/#environment-preparation","title":"Environment preparation","text":""},{"location":"GettingStarted/SetupUnityProject/#system-setup","title":"System setup","text":"Ubuntu 22Windows
  1. Make sure your machine meets the required hardware specifications. - NOTE: PC requirements may vary depending on simulation contents which may change as the simulator develops
  2. Prepare a desktop PC with Ubuntu 22.04 installed.
  3. Install Nvidia drivers and Vulkan Graphics API.
  4. Install git.
  5. Set the ROS 2 middleware and the localhost-only mode in the ~/.profile file (or in ~/.bash_profile or ~/.bash_login if either of those exists):

    export ROS_LOCALHOST_ONLY=1\nexport RMW_IMPLEMENTATION=rmw_cyclonedds_cpp\n

    Warning

    A system restart is required for these changes to work.

  6. Set the system optimizations by adding this code to the very bottom of your ~/.bashrc file:

    if [ ! -e /tmp/cycloneDDS_configured ]; then\n    sudo sysctl -w net.core.rmem_max=2147483647\n    sudo ip link set lo multicast on\n    touch /tmp/cycloneDDS_configured\nfi\n

    Info

    As a result, each time you run the terminal (bash prompt), your OS will be configured for the best ROS 2 performance. Make sure you open your terminal at least once before running any instance of AWSIM (or the Editor running AWSIM).

  1. Make sure your machine meets the required hardware specifications. - NOTE: PC requirements may vary depending on simulation contents which may change as the simulator develops
  2. Prepare a desktop PC with Windows 10 or 11 (64 bit) installed.
  3. Install git.
  4. Install Microsoft Visual C++ Redistributable packages for Visual Studio 2015, 2017, 2019, and 2022 (X64 Architecture)
"},{"location":"GettingStarted/SetupUnityProject/#ros-2","title":"ROS 2","text":"

AWSIM comes with a standalone flavor of Ros2ForUnity. This means that, to avoid internal conflicts between different ROS 2 versions, you shouldn't run the Editor or AWSIM binary with ROS 2 sourced.

Warning

Do not run the AWSIM, Unity Hub, or the Editor with ROS 2 sourced.
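
A quick way to check whether ROS 2 is sourced in the current shell (if this prints a distribution name such as humble, open a fresh terminal instead):

printenv ROS_DISTRO\n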

Ubuntu 22Windows "},{"location":"GettingStarted/SetupUnityProject/#unity-installation","title":"Unity installation","text":"

Info

AWSIM's Unity version is currently 2021.1.7f1

Follow the steps below to install Unity on your machine:

  1. Install UnityHub to manage Unity projects. Please go to the Unity download page and download the latest UnityHub.AppImage.
  2. Install Unity 2021.1.7f1 via UnityHub. - Open a new terminal, navigate to the directory where UnityHub.AppImage was downloaded, and execute the following command:
    ./UnityHub.AppImage\n
    - To install the Unity Editor, please proceed as shown in the images below - At this point, your Unity installation process should have started.
      === \"Ubuntu 22\"\n  - *NOTE: If the installation process has not started after clicking the green button (image above), please copy the hyperlink (by rightclicking the button and selecting `Copy link address`) and add it as a argument for Unity Hub app. An example command:\n  ```\n  ./UnityHub.AppImage unityhub://2021.1.7f1/d91830b65d9b\n  ```\n

    - After successful installation the version will be available under the Installs tab in Unity Hub.

"},{"location":"GettingStarted/SetupUnityProject/#open-awsim-project","title":"Open AWSIM project","text":"

To open the Unity AWSIM project in Unity Editor:

Using Unity HubUsing Terminal
  1. Make sure you have the AWSIM repository cloned and ROS 2 is not sourced.

    git clone git@github.com:autowarefoundation/AWSIM.git\n
  2. Launch UnityHub.

    ./UnityHub.AppImage\n

    Info

    If you are launching the Unity Hub from the Ubuntu applications menu (without the terminal), make sure that system optimizations are set. To be sure, run the terminal at least once before running the Unity Hub. This will apply the OS settings.

  3. Open the project in UnityHub - Click the Open button

    • Navigate to the directory where the AWSIM repository was cloned
    • The project should be added to Projects tab in Unity Hub. To launch the project in Unity Editor simply click the AWSIM item
    • The project is now ready to use
  1. Enter the AWSIM directory (make sure ROS 2 is not sourced).

    cd AWSIM\n
  2. If your Unity Editor is in the default location, run the project using the editor command.

    ~/Unity/Hub/Editor/2021.1.7f1/Editor/Unity -projectPath .\n

    Info

    If your Unity Editor is installed in a different location, please adjust the path accordingly.
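
    If you are unsure where Unity Hub placed the editor, listing the default install directory (assuming the standard Unity Hub layout used in the command above) shows the installed versions:

    ls ~/Unity/Hub/Editor/\n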

Warning

If you get the safe mode dialog when starting the Unity Editor, you may need to install openssl.

  1. download libssl $ wget http://security.ubuntu.com/ubuntu/pool/main/o/openssl1.0/libssl1.0.0_1.0.2n-1ubuntu5.13_amd64.deb
  2. install sudo dpkg -i libssl1.0.0_1.0.2n-1ubuntu5.13_amd64.deb
"},{"location":"GettingStarted/SetupUnityProject/#import-external-packages","title":"Import external packages","text":"

To properly run and use the AWSIM project in Unity, it is required to download the map package, which is not included in the repository.

  1. Download and import Nishishinjuku_URP_v0.1.0.unitypackage

    Download Map Package

  2. In Unity Editor, from the menu bar at the top, select Assets -> Import Package -> Custom Package... and navigate to the Nishishinjuku_urp.unitypackage file.

  3. The Nishishinjuku package has been successfully imported under the Assets/AWSIM/Externals/ directory.

Info

The Externals directory is added to the .gitignore because the map has a large file size and should not be directly uploaded to the repository.

"},{"location":"GettingStarted/SetupUnityProject/#import-graphy-asset","title":"Import Graphy Asset","text":"

Import Graphy by following these instructions: Graphy Asset Setup

"},{"location":"GettingStarted/SetupUnityProject/#run-the-demo-in-editor","title":"Run the demo in Editor","text":"

The following steps describe how to run the demo in Unity Editor:

  1. Open the AutowareSimulation.unity scene placed under Assets/AWSIM/Scenes/Main directory
  2. Run the simulation by clicking Play button placed at the top section of Editor.
"},{"location":"GettingStarted/UsingOpenSCENARIO/","title":"Using OpenSCENARIO","text":""},{"location":"GettingStarted/UsingOpenSCENARIO/#using-openscenario","title":"Using OpenSCENARIO","text":"

Warning

Running AWSIM with scenario_simulator_v2 is still a prototype, so stable running is not guaranteed.

Below you can find instructions on how to set up OpenSCENARIO execution using scenario_simulator_v2 with AWSIM as the simulator. The instructions assume the Ubuntu OS is used.

"},{"location":"GettingStarted/UsingOpenSCENARIO/#prerequisites","title":"Prerequisites","text":"

Follow Setup Unity Project tutorial

"},{"location":"GettingStarted/UsingOpenSCENARIO/#build-autoware-with-scenario_simulator_v2","title":"Build Autoware with scenario_simulator_v2","text":"

In order to configure the Autoware software with the AWSIM demo, please:

  1. Clone RobotecAI's Autoware and move to the directory.
    git clone git@github.com:RobotecAI/autoware-1.git\ncd autoware\n
  2. Check out the awsim-ss2-stable branch
    git checkout awsim-ss2-stable\n
  3. Configure the environment. (Skip if Autoware environment has been configured before)
    ./setup-dev-env.sh\n
  4. Create the src directory and clone external dependent repositories into it.
    mkdir src\nvcs import src < autoware.repos\nvcs import src < simulator.repos\n
  5. Download shinjuku_map.zip archive

  6. Unzip it to src/simulator directory

    unzip <Download directory>/shinjuku_map.zip -d src/simulator\n
  7. Install dependent ROS packages.
    source /opt/ros/humble/setup.bash\nrosdep update\nrosdep install -y --from-paths src --ignore-src --rosdistro $ROS_DISTRO\n
  8. Build the workspace.
    colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS=\"-w\"\n
"},{"location":"GettingStarted/UsingOpenSCENARIO/#running-the-demo","title":"Running the demo","text":"
  1. Download the AWSIM_v1.2.0_ss2.zip archive and run it

  2. Launch scenario_test_runner.

    source install/setup.bash\nros2 launch scenario_test_runner scenario_test_runner.launch.py                        \\\narchitecture_type:=awf/universe  record:=false                                         \\\nscenario:='$(find-pkg-share scenario_test_runner)/scenario/sample_awsim.yaml'          \\\nsensor_model:=awsim_sensor_kit  vehicle_model:=sample_vehicle                          \\\nlaunch_simple_sensor_simulator:=false autoware_launch_file:=\"e2e_simulator.launch.xml\" \\\ninitialize_duration:=260 port:=8080\n
"},{"location":"GettingStarted/UsingOpenSCENARIO/#troubleshooting","title":"Troubleshooting","text":"

In case of problems, make sure that the regular demo works well with the Autoware built above. Follow the troubleshooting page there if necessary.

"},{"location":"GettingStarted/UsingOpenSCENARIO/#appendix","title":"Appendix","text":""},{"location":"Introduction/AWSIM/","title":"AWSIM Labs","text":""},{"location":"Introduction/AWSIM/#awsim-labs","title":"AWSIM Labs","text":"

AWSIM Labs is a fork of TIER IV/AWSIM, an open-source simulator made with Unity for autonomous driving research and development. It is developed for self-driving software like Autoware. This simulator aims to bridge the gap between the virtual and real worlds, enabling users to train and evaluate their autonomous systems in a safe and controlled environment before deploying them on real vehicles. It provides a realistic virtual environment for training, testing, and evaluating various aspects of autonomous driving systems.

AWSIM simulates a variety of real-world scenarios with accurate physics and sensor models. It offers a wide range of sensors, such as Cameras, GNSS, IMU and LiDARs, allowing developers to accurately simulate their autonomous vehicle's interactions with the environment. The simulator also models dynamic objects, such as pedestrians, other vehicles, and traffic lights, making it possible to study interactions and decision-making in complex traffic scenarios. This enables the testing and evaluation of perception, planning, and control algorithms under different sensor configurations and scenarios.

AWSIM supports a flexible and modular architecture, making it easy to customize and extend its capabilities. Users can modify the current or add a new environment with their own assets and traffic rules to create custom scenarios to suit their specific research needs. This allows for the development and testing of advanced algorithms in diverse driving conditions.

Because AWSIM was developed mainly to work with Autoware, it supports:

Prerequisites

You can read more about the prerequisites and running AWSIM here.

Connection with Autoware

Introduction about how the connection between AWSIM and Autoware works can be read here.

"},{"location":"Introduction/AWSIM/#why-was-awsim-developed","title":"Why was AWSIM developed?","text":"

The main objectives of AWSIM are to facilitate research and development in autonomous driving, enable benchmarking of algorithms and systems, and foster collaboration and knowledge exchange within the autonomous driving community. By providing a realistic and accessible platform, AWSIM aims to accelerate the progress and innovation in the field of autonomous driving.

"},{"location":"Introduction/AWSIM/#architecture","title":"Architecture","text":"

To describe the architecture of AWSIM, first of all, it is necessary to mention the Scene. It contains all the objects occurring in the simulation of a specific scenario and their configurations. The default AWSIM scene that is developed to work with Autoware is called AutowareSimulation.

In the scene we can distinguish basic components such as MainCamera, ClockPublisher, EventSystem and Canvas. A detailed description of the scene and its components can be found here.

Besides the elements mentioned above, the scene contains two more, very important and complex components: Environment and EgoVehicle - described below.

"},{"location":"Introduction/AWSIM/#environment","title":"Environment","text":"

Environment is a component that contains all Visual Elements that simulate the environment in the scene and those that provide control over them. It also contains two components: Directional Light and Volume, which ensure suitable lighting for Visual Elements and simulate weather conditions. A detailed description of these components can be found here.

In addition to Visual Elements such as buildings or greenery, it contains the entire architecture responsible for traffic. The traffic involves NPCVehicles that are spawned in the simulation by TrafficSimulator - using traffic components. A quick overview of the traffic components is provided below, however, you can read their detailed description here.

NPCPedestrians are also Environment components, but they are not controlled by TrafficSimulator. They have attached scripts that control their movement - you can read more details here.

"},{"location":"Introduction/AWSIM/#traffic-components","title":"Traffic Components","text":"

TrafficLanes and StopLines are elements loaded into Environment from Lanelet2. TrafficLanes have defined cross-references in such a way as to create routes along the traffic lanes. In addition, each TrafficLane present at the intersection has specific conditions for yielding priority. TrafficSimulator uses TrafficLanes to spawn NPCVehicles and ensure their movement along these lanes. If a TrafficLane ends just before an intersection, it has a reference to a StopLine. Each StopLine at an intersection with TrafficLights has a reference to the nearest TrafficLight. TrafficLights belong to one of the visual element groups and provide an interface to control visual elements that simulate traffic light sources (bulbs). A single TrafficIntersection is responsible for controlling all TrafficLights at one intersection. A detailed description of the mentioned components is in this section.

"},{"location":"Introduction/AWSIM/#egovehicle","title":"EgoVehicle","text":"

EgoVehicle is a component responsible for simulating an autonomous vehicle moving around the scene. It includes:

A detailed description of EgoVehicle and its components mentioned above can be found here. The sensor placement on EgoVehicle used in the default scene is described here. Details about each of the individual sensors are available in the following sections: Pose, GNSS, LiDAR, IMU, Camera, Vehicle Status.

"},{"location":"Introduction/AWSIM/#fixedupdate-limitation","title":"FixedUpdate Limitation","text":"

In AWSIM, the sensors' publishing methods are triggered from the FixedUpdate function and the output frequency is controlled by:

time += Time.deltaTime;\nvar interval = 1.0f / OutputHz;\ninterval -= 0.00001f; // Allow for accuracy errors.\nif (time < interval)\n    return;\ntime = 0; // Reset the accumulated time once the output has been published.\n

Since this code runs within the FixedUpdate method, it's essential to note that Time.deltaTime is equal to the Fixed Timestep, as stated in the Unity Time.deltaTime documentation. Consequently, with each invocation of FixedUpdate, the time variable in the sensor script will increment by a constant value of Fixed Timestep, independent of the actual passage of real time. Additionally, as outlined in the Unity documentation, the FixedUpdate method might execute multiple times before the Update method is called, resulting in extremely small time intervals between successive FixedUpdate calls. The diagram below illustrates the mechanism of invoking the FixedUpdate event function.

During each frame (game tick) the following actions are performed:

As a consequence, this engine feature may result in unexpected behavior when FPS (Frames Per Second) are unstable, or under certain combinations of FPS, Fixed Timestep, and sensor OutputHz.

In case of low frame rates, it is advisable to reduce the Time Scale of the simulation. The Time Scale value impacts simulation time, which refers to the time that is simulated within the model and might or might not progress at the same rate as real-time. Therefore, by reducing the time scale, the progression of simulation time slows down, allowing the simulation more time to perform its tasks.
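
For illustration, the Time Scale can be lowered either via Edit -> Project Settings -> Time or from a script. A minimal sketch using Unity's standard Time.timeScale property (illustrative only, not AWSIM code):

using UnityEngine;\n\npublic class SlowMotion : MonoBehaviour\n{\n    void Start()\n    {\n        // Simulated time now advances at half the real-time rate, leaving more\n        // wall-clock time per simulated second for physics and sensor updates.\n        Time.timeScale = 0.5f;\n    }\n}\n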

"},{"location":"Introduction/Autoware/","title":"Autoware","text":""},{"location":"Introduction/Autoware/#autoware","title":"Autoware","text":"

Autoware is an open-source software platform specifically designed for autonomous driving applications. It was created to provide a comprehensive framework for developing and testing autonomous vehicle systems. Autoware offers a collection of modules and libraries that assist in various tasks related to perception, planning, and control, making it easier for researchers and developers to build autonomous driving systems.

The primary purpose of Autoware is to enable the development of self-driving technologies by providing a robust and flexible platform. It aims to accelerate the research and deployment of autonomous vehicles by offering a ready-to-use software stack. Autoware focuses on urban driving scenarios and supports various sensors such as LiDAR, Radars, and Cameras, allowing for perception of the vehicle's surroundings.

"},{"location":"Introduction/Autoware/#why-use-awsim-with-autoware","title":"Why use AWSIM with Autoware?","text":"

Autoware can be used with AWSIM for several reasons. Firstly, simulators like AWSIM provide a cost-effective and safe environment for testing and validating autonomous driving algorithms before deploying them on real vehicles. Autoware's integration with a simulator allows developers to evaluate and fine-tune their algorithms without the risk of real-world accidents or damage.

Additionally, simulators enable developers to recreate complex driving scenarios, including difficult conditions or rare events, which may be difficult to replicate in real-world testing with such high fidelity. Autoware's compatibility with AWSIM allows seamless integration between the software and the simulated vehicle, enabling comprehensive testing and validation of autonomous driving capabilities. By utilizing a simulator, Autoware can be extensively tested under various scenarios to ensure its robustness and reliability.

Connection with Autoware

Introduction about how the connection between AWSIM and Autoware works can be read here.

"},{"location":"Introduction/Autoware/#architecture","title":"Architecture","text":"

In terms of architecture, Autoware follows a modular approach. It consists of multiple independent modules that communicate with each other through ROS2. This modular structure allows users to select and combine different modules based on their specific needs and requirements. The software stack comprises multiple components, including perception, localization, planning, and control modules. Here's a brief overview of each module:

"},{"location":"Introduction/CombinationWithAutoware/","title":"CombinationWithAutoware","text":"

Autoware is a powerful open-source software platform for autonomous driving. Its modular architecture, including perception, localization, planning, and control modules, provides a comprehensive framework for developing self-driving vehicles. Autoware combined with AWSIM simulator provides safe testing, validation, and optimization of autonomous driving algorithms in diverse scenarios.

Run with Autoware

If you would like to know how to run AWSIM with Autoware, we encourage you to read this section.

"},{"location":"Introduction/CombinationWithAutoware/#features","title":"Features","text":"

The combination of Autoware and AWSIM provides the opportunity to check the correctness of the vehicle's behavior in various traffic situations. Below are presented some typical features provided by this combination. Moreover, examples of detecting several bad behaviors are included.

"},{"location":"Introduction/CombinationWithAutoware/#engagement","title":"Engagement","text":" "},{"location":"Introduction/CombinationWithAutoware/#traffic-light-recognition","title":"Traffic light recognition","text":" "},{"location":"Introduction/CombinationWithAutoware/#interaction-with-vehicles","title":"Interaction with vehicles","text":" "},{"location":"Introduction/CombinationWithAutoware/#interaction-with-pedestrians","title":"Interaction with pedestrians","text":" "},{"location":"Introduction/CombinationWithAutoware/#detecting-bad-behaviors","title":"Detecting bad behaviors","text":" "},{"location":"Introduction/CombinationWithAutoware/#combination-architecture","title":"Combination Architecture","text":"

The combination of AWSIM with Autoware is possible thanks to the Vehicle Interface and Sensing modules of the Autoware architecture. The component responsible for ensuring the connection with these modules from the AWSIM side is EgoVehicle. It has been adapted to the Autoware architecture and provides ROS2 topic-based communication. However, the other essential component is ClockPublisher, which provides the simulation time for Autoware - also published on a topic - more details here.

The EgoVehicle component provides the publication of the current vehicle status through a script working within Vehicle Status. It provides real-time information such as the current speed, the current steering of the wheels, or the current states of the lights - these are outputs from AWSIM.

On the other hand, Vehicle Ros Input is responsible for providing the values of the outputs from Autoware. It subscribes to the current commands related to the given acceleration, gearbox gear or control of the specified lights.

Execution of the received commands is possible thanks to Vehicle, which ensures the setting of appropriate accelerations on the Wheel components and controls the visual elements of the vehicle.

The remaining data delivered from AWSIM to Autoware are sensor data, which provide information about the current state of the surrounding environment and the data necessary to accurately estimate the EgoVehicle position.

More about EgoVehicle and its scripts is described in this section.

"},{"location":"Introduction/CombinationWithAutoware/#sequence-diagram","title":"Sequence diagram","text":"

Below is a simplified sequence diagram of the information exchange in the connection between AWSIM and Autoware. As you can see, the first essential information published from AWSIM is Clock - the simulation time. Next, EgoVehicle is spawned and the first sensor data are published, which are used in the process of automatic position initialization on the Autoware side. At the same time, the simulation on the AWSIM side is updated.

Next in the diagram is the main information update loop in which:

The order of information exchange presented in the diagram is a simplification. The exchange of information takes place through the publish-subscribe model, and each piece of data is sent at a predefined frequency.

"},{"location":"Introduction/CombinationWithAutowareAndScenarioSimulator/","title":"Combination with Autoware and Scenario simulator v2","text":""},{"location":"Introduction/CombinationWithAutowareAndScenarioSimulator/#combination-with-autoware-and-scenario-simulator-v2","title":"Combination with Autoware and Scenario simulator v2","text":"

Scenario Simulator v2 (SS2) is a scenario testing framework specifically developed for Autoware, an open-source self-driving software platform. It serves as a tool for Autoware developers to conveniently create and execute scenarios across different simulators.

The primary goal of SS2 is to provide Autoware developers with an efficient means of writing scenarios once and then executing them in multiple simulators. By offering support for different simulators and scenario description formats, the framework ensures flexibility and compatibility.

The default scenario format in this framework is TIER IV Scenario Format version 2.0. A scenario defined in this format is converted by scenario_test_runner to the openSCENARIO format, which is then interpreted by openscenario_interpreter. Based on this interpretation, traffic_simulator simulates traffic flow in an urban area. Each NPC has a behavior tree and executes commands from the scenario.

The framework uses ZeroMQ Inter-Process communication for seamless interaction between the simulator and the traffic_simulator. To ensure synchronous operation of the simulators, SS2 utilizes the Request/Reply sockets provided by ZeroMQ and exchanges binarized data through Protocol Buffers. This enables the simulators to run in a synchronized manner, enhancing the accuracy and reliability of scenario testing.

QuickStart Scenario simulator v2 with Autoware

If you would like to see how SS2 works with Autoware using the default built-in simulator - simple_sensor_simulator (without running AWSIM) - we encourage you to read this tutorial.

"},{"location":"Introduction/CombinationWithAutowareAndScenarioSimulator/#combination-architecture","title":"Combination Architecture","text":"

The AWSIM scene architecture used in combination with SS2 changes considerably compared to the default scene. Here traffic_simulator from SS2 replaces the TrafficSimulator implementation in AWSIM - for this reason it and its StopLines, TrafficLanes and TrafficIntersection components are removed. Also, NPCPedestrian and NPCVehicles are not added as aggregators of NPCs in Environment.

Instead, their counterparts are added in the ScenarioSimulatorConnector object, which is responsible for spawning the Entities of the scenario. An Entity can be a Pedestrian, Vehicle, MiscObject or Ego. EgoEntity is the equivalent of EgoVehicle - which is also removed from the default scene. However, it has the same components - it still communicates with Autoware as described here. So it can be considered that EgoVehicle has not changed, and NPCPedestrians and NPCVehicles are now controlled directly by SS2.

A detailed description of the SS2 architecture is available here. A description of the communication via ROS2 between SS2 and Autoware can be found here.

"},{"location":"Introduction/CombinationWithAutowareAndScenarioSimulator/#sequence-diagram","title":"Sequence diagram","text":"

In the sequence diagram, the part responsible for AWSIM communication with Autoware also remained unchanged. The description available here is the valid description of the reference shown in the diagram below.

Communication between SS2 and AWSIM takes place via Request-Response messages, and is as follows:

  1. Launch - Autoware is started and initialized.
  2. Initialize - the environment in AWSIM is initialized, basic parameters are set.
  3. opt Ego spawn - optional, EgoEntity (with sensors) is spawned in the configuration defined in the scenario.
  4. opt NPC spawn loop - optional, all Entities (NPCs) defined in the scenario are spawned, the scenario may contain any number of each Entity type, it may not contain them at all or it may also be any combination of the available ones.
  5. update loop - this is the main loop where scenario commands are executed, first EgoEntity is updated - SS2 gets its status, and then every other Entity is updated - the status of each NPCs is set according to the scenario. Next, the simulation frame is updated - here the communication between Autoware and AWSIM takes place. The last step of the loop is to update the traffic light state.
  6. despawn loop - after the end of the scenario, all Entities spawned in the scene are despawned (including EgoEntity)
  7. Terminate - Autoware is terminated.

Documentation of the commands used in the sequence is available here.

"},{"location":"ProjectGuide/Directory/","title":"Directory","text":""},{"location":"ProjectGuide/Directory/#directory","title":"Directory","text":"

AWSIM has the following directory structure. Files are mostly grouped by type.

AWSIM       //  root directory.\n \u2502\n \u2502\n \u251c\u2500Assets                           // Unity project Assets directory.\n \u2502  \u2502                               // Place external libraries\n \u2502  \u2502                               // under this directory.\n \u2502  \u2502                               // (e.g. RGLUnityPlugin, ROS2ForUnity, etc..)\n \u2502  \u2502\n \u2502  \u2502\n \u2502  \u251c\u2500AWSIM                         // Includes assets directly related to AWSIM\n \u2502  |                               // (Scripts, Prefabs etc.)\n \u2502  \u2502  \u2502\n \u2502  \u2502  \u2502\n \u2502  \u2502  \u251c\u2500Externals                  // Place for large files or\n \u2502  \u2502  |                            // external project dependencies\n \u2502  \u2502  |                            // (e.g. Nishishinjuku map asset).\n \u2502  \u2502  \u2502                            // The directory is added to `.gitignore`\n \u2502  \u2502  \u2502\n \u2502  \u2502  \u251c\u2500HDRPDefaultResources       // Unity HDRP default assets.\n \u2502  \u2502  \u2502\n \u2502  \u2502  \u251c\u2500Materials                  // Materials used commonly in Project.\n \u2502  \u2502  \u2502\n \u2502  \u2502  \u251c\u2500Models                     // 3D models\n \u2502  \u2502  \u2502  \u2502                         // Textures and materials for 3D models\n \u2502  \u2502  \u2502  \u2502                         // are also included.\n \u2502  \u2502  \u2502  \u2502\n \u2502  \u2502  \u2502  \u2514\u2500<3D Model>              // Directory of each 3D model.\n \u2502  \u2502  \u2502     \u2502\n \u2502  \u2502  \u2502     \u2502\n \u2502  \u2502  \u2502     \u251c\u2500Materials            // Materials used in 3D model.\n \u2502  \u2502  \u2502     \u2502\n \u2502  \u2502  \u2502     \u2502\n \u2502  \u2502  \u2502     \u2514\u2500Textures             // Textures used in 3D model.\n \u2502  \u2502  \u2502\n \u2502  \u2502  \u2502\n \u2502  \u2502  \u251c\u2500Prefabs                    // Prefabs not dependent on a specific scene.\n \u2502  \u2502  \u2502\n \u2502  \u2502  \u251c\u2500Scenes                     // Scenes\n \u2502  \u2502  \u2502  \u2502                         // Includes scene-specific scripts, etc.\n \u2502  \u2502  \u2502  \u2502\n \u2502  \u2502  \u2502  \u2502\n \u2502  \u2502  \u2502  \u251c\u2500Main                    // Scenes used in the simulation.\n \u2502  \u2502  \u2502  \u2502\n \u2502  \u2502  \u2502  \u2502\n \u2502  \u2502  \u2502  \u2514\u2500Samples                 // Sample Scenes showcasing components.\n \u2502  \u2502  \u2502\n \u2502  \u2502  \u2502\n \u2502  \u2502  \u2514\u2500Scripts                    // C# scripts.\n \u2502  \u2502\n \u2502  \u2502\n \u2502  \u251c\u2500RGLUnityPlugin        // Robotec GPU LiDAR external Library.\n \u2502  \u2502                       // see: https://github.com/RobotecAI/RobotecGPULidar\n \u2502  \u2502\n \u2502  \u2502\n \u2502  \u2514\u2500Ros2ForUnity          // ROS2 communication external Library.\n \u2502                          // see: https://github.com/RobotecAI/ros2-for-unity\n \u2502\n \u251c\u2500Packages         // Unity automatically generated directories.\n \u251c\u2500ProjectSettings  //\n \u251c\u2500UserSettings     //\n \u2502\n \u2502\n \u2514\u2500docs             // AWSIM documentation. Generated using mkdocs.\n                    // see: https://www.mkdocs.org/\n
"},{"location":"ProjectGuide/ExternalLibraries/","title":"External Libraries","text":""},{"location":"ProjectGuide/ExternalLibraries/#external-libraries","title":"External Libraries","text":"

List of external libraries used in AWSIM.

Library Usage URL ros2-for-unity ROS2 communication https://github.com/RobotecAI/ros2-for-unity Robotec-GPU-LiDAR LiDAR simulation https://github.com/RobotecAI/RobotecGPULidar"},{"location":"ProjectGuide/GitBranch/","title":"Git Branch","text":""},{"location":"ProjectGuide/GitBranch/#git-branch","title":"Git branch","text":"

The document presents the rules of branching adopted in the AWSIM development process.

"},{"location":"ProjectGuide/GitBranch/#branches","title":"Branches","text":"branch explain main Stable branch. Contains all the latest releases. feature/*** Feature implementation branch created from main. After implementation, it is merged into main. gh-pages Documentation hosted on GitHub pages."},{"location":"ProjectGuide/GitBranch/#branch-flow","title":"Branch flow","text":"
  1. Create feature/*** branch from main.
  2. Implement in feature/*** branch.
  3. Create a PR from the feature/*** branch to main branch. Merge after review.
"},{"location":"ProjectGuide/HotkeyList/","title":"Hotkey List","text":""},{"location":"ProjectGuide/HotkeyList/#hotkey-list","title":"Hotkey List","text":""},{"location":"ProjectGuide/HotkeyList/#vehiclekeyboardinputcs","title":"VehicleKeyboardInput.cs","text":"Key Feature D Change drive gear. P parking gear. R Reverse gear. N Neutral gear. 1 Left turn signal. 2 Right turn signal. 3 Hazard. 4 Turn signal off. Up arrow Accelerate. Left arrow Steering (Left). Right arrow Steering (Right). Down arrow Breaking.

W,A,S,D keys can also be used to control the vehicle, similar to the arrow keys.

"},{"location":"ProjectGuide/HotkeyList/#followcameracs","title":"FollowCamera.cs","text":"Key Feature C Camera rotation on/off toggle. Mouse drag Rotate Camera angle. Mouse wheel Zoom in/out of camera."},{"location":"ProjectGuide/HotkeyList/#hotkeyhandlercs","title":"HotkeyHandler.cs","text":"Key Feature Esc Toggle main menu Ctrl + R Reset ego vehicle"},{"location":"ProjectGuide/Scenes/","title":"Scenes","text":"

In the AWSIM Unity project there is one main scene (AutowareSimulation) and several additional ones that can be helpful during development. This section describes the purpose of each scene in the project.

"},{"location":"ProjectGuide/Scenes/#autowaresimulation","title":"AutowareSimulation","text":"

The AutowareSimulation scene is the main scene that is designed to run the AWSIM simulation together with Autoware. It allows for effortless operation, just run this scene, run Autoware with the correct map file and everything should work right out of the box.

"},{"location":"ProjectGuide/Scenes/#pointcloudmapping","title":"PointCloudMapping","text":"

The PointCloudMapping is a scene that is designed to create a point cloud using the Unity world. Using the RGLUnityPlugin and prefab Environment - on which there are models with Meshes - we are able to obtain a *.pcd file of the simulated world.

"},{"location":"ProjectGuide/Scenes/#sensorconfig","title":"SensorConfig","text":"

Scene SensorConfig was developed to perform a quick test of sensors added to the EgoVehicle prefab. Replace the Lexus prefab with a vehicle prefab you developed and check whether all data that should be published is present, whether it is on the appropriate topics and whether the data is correct.

"},{"location":"ProjectGuide/Scenes/#npcvehiclesample","title":"NPCVehicleSample","text":"

The NPCVehicleSample was developed to conduct a quick test of the developed vehicle. Replace the taxi prefab with a vehicle prefab you developed (EgoVehicle or NPCVehicle) and check whether the basic things are configured correctly. The description of how to develop your own vehicle and add it to the project is in this section.

"},{"location":"ProjectGuide/Scenes/#npcpedestriansample","title":"NPCPedestrianSample","text":"

The NPCPedestrianSample was developed to conduct a quick test of the developed pedestrian. Replace the NPC prefab in NPC Pedestrian Test script with a prefab you developed and check whether the basic things are configured correctly.

"},{"location":"ProjectGuide/Scenes/#trafficintersectionsample","title":"TrafficIntersectionSample","text":"

The TrafficIntersectionSample was developed to conduct a quick test of the developed traffic intersection. Replace the intersection configuration with your own and check whether it works correctly. You can add additional groups of lights and create much larger, more complex sequences. A description of how to configure your own traffic intersection is in this section.

"},{"location":"ProjectGuide/Scenes/#trafficlightsample","title":"TrafficLightSample","text":"

The TrafficLightSample was developed to conduct a quick test of a developed traffic lights model in cooperation with the script controlling it. Replace the lights and configuration with your own and check whether it works correctly.

"},{"location":"ProjectGuide/Scenes/#randomtrafficyielding","title":"RandomTrafficYielding","text":"

The RandomTrafficYielding scene was developed to conduct tests of the developed yielding rules at a single intersection.

"},{"location":"ProjectGuide/Scenes/#randomtrafficyieldingbirdeye","title":"RandomTrafficYieldingBirdEye","text":"

The RandomTrafficYieldingBirdEye scene was developed to conduct tests of the developed yielding rules with multiple vehicles moving around the entire environment.

"},{"location":"ProjectGuide/Scenes/#rgl-test-scenes","title":"RGL test scenes","text":"

The scenes described below are used for tests related to the external library RGLUnityPlugin (RGL) - you can read more about it in this section.

"},{"location":"ProjectGuide/Scenes/#lidarscenedevelop","title":"LidarSceneDevelop","text":"

The scene LidarSceneDevelop can be used as a complete, minimalistic example of how to set up RGL. It contains an RGLSceneManager component, four lidars, and an environment composed of a floor and walls.

"},{"location":"ProjectGuide/Scenes/#lidarskinnedstress","title":"LidarSkinnedStress","text":"

The scene LidarSkinnedStress can be used to test the performance of RGL, e.g. how performance is affected when using Regular Meshes compared to Skinned Meshes. The scene contains a large number of animated models that require meshes to be updated every frame, thus requiring more resources (CPU and data exchange with GPU).

"},{"location":"ProjectGuide/Scenes/#lidardisablingtest","title":"LidarDisablingTest","text":"

The scene LidarDisablingTest can be used to test RGL performance with similar objects but with different configurations. It allows you to check whether RGL works correctly when various components that can be sources of Meshes are disabled (Colliders, Regular Meshes, Skinned Meshes, ...).

"},{"location":"ProjectGuide/Scenes/#lidarinstancesegmentationdemo","title":"LidarInstanceSegmentationDemo","text":"

The LidarInstanceSegmentationDemo is a demo scene for the instance segmentation feature. It contains a set of GameObjects with IDs assigned and a sample lidar that publishes output to a ROS2 topic. The GameObjects are grouped to present different methods of assigning IDs.

To run the demo scene:

  1. Open scene: Assets/AWSIM/Scenes/Samples/LidarInstanceSegmentationDemo.unity
  2. Run simulation
  3. Open rviz2
  4. Set up rviz2 as follows: - Fixed frame: world - PointCloud2 topic: lidar/instance_id - Topic QoS as in the screen above - Channel name: entity_id - For better visualization, disable Autocompute intensity and set min to 0 and max to 50.
"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Home","text":""},{"location":"#welcome-to-awsim-labs","title":"Welcome to AWSIM Labs","text":"

AWSIM Labs is currently being developed under the Autoware Labs initiative. The main purpose of this fork is to provide faster implementation of features needed by users of AWSIM, while also ensuring a high-performance simulation environment for Autoware.

This is a fork of TIER IV's AWSIM.

"},{"location":"#features","title":"Features","text":""},{"location":"#feature-differences-from-the-main-awsim","title":"Feature differences from the main AWSIM","text":"AWSIM AWSIM Labs Using HDRP Using URP Using Unity 2021.1.7f1 Using Unity LTS 2022.3.21f1 Limited interaction with simulation and UI Interactable simulation and UI Uses more resources Uses less resources - Multiple scene and vehicle setup"},{"location":"#try-the-simulation-demo-yourself","title":"Try the simulation demo yourself!","text":"

We don't have a release yet. Please build it from source.

Download AWSIM Demo for Ubuntu

To test the AWSIM Labs demo with Autoware please refer to the Quick start demo section.

"},{"location":"Components/Clock/ClockPublisher/","title":"Clock Publisher","text":""},{"location":"Components/Clock/ClockPublisher/#introduction","title":"Introduction","text":"

ClockPublisher allows the publication of the simulation time from the clock operating within AWSIM. The current time is retrieved from a TimeSource object via the SimulatorROS2Node. AWSIM provides a convenient method for selecting the appropriate time source type, as well as the flexibility to implement custom TimeSources tailored to specific user requirements.

"},{"location":"Components/Clock/ClockPublisher/#setup","title":"Setup","text":"

To enable the publication of the current time during simulation execution, ClockPublisher must be included as a component within the scene. Moreover, to allow the TimeSource to be set or changed, the TimeSourceSelector object must also be present in the active scene.

"},{"location":"Components/Clock/ClockPublisher/#selecting-time-source","title":"Selecting Time Source","text":"

The desired TimeSource can be selected in two ways:

"},{"location":"Components/Clock/ClockPublisher/#list-of-time-sources","title":"List of Time Sources","text":"Type String Value for JSON Config Description UNITY unity based on the time of the Unity Engine SS2 ss2 driven by an external source, used by the scenario simulator v2 DOTNET_SYSTEM system based on system time, starting with time since UNIX epoch, progressing according to simulation timescale DOTNET_SIMULATION simulation based on system time, starting with zero value, progressing according to simulation timescale ROS2 ros2 based on ROS2 time (system time by default)"},{"location":"Components/Clock/ClockPublisher/#architecture","title":"Architecture","text":"

The ClockPublisher operates within a dedicated thread called the 'Clock' thread. This design choice offers significant advantages by freeing the publishing process from the constraints imposed by fixed update limits. As a result, ClockPublisher is able to consistently publish time at a high rate, ensuring stability and accuracy.
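
As a rough illustration of this design, a dedicated publishing thread can be sketched as below. This is a hedged sketch only - the class and method names (ClockThreadSketch, PublishCurrentTime) and the 10 ms sleep interval are assumptions, not the actual AWSIM implementation.

using System.Threading;\n\n// Minimal sketch of a dedicated clock thread - the names here are illustrative,\n// not the actual AWSIM API.\npublic class ClockThreadSketch\n{\n    private Thread clockThread;\n    private volatile bool running;\n\n    public void Start()\n    {\n        running = true;\n        clockThread = new Thread(PublishLoop) { Name = \"Clock\" };\n        clockThread.Start();\n    }\n\n    private void PublishLoop()\n    {\n        while (running)\n        {\n            PublishCurrentTime(); // read the TimeSource and publish a ROS2 Clock message\n            Thread.Sleep(10);     // roughly 100 Hz, independent of Unity's FixedUpdate\n        }\n    }\n\n    private void PublishCurrentTime() { /* read TimeSource via SimulatorROS2Node, publish /clock */ }\n\n    public void Stop() { running = false; clockThread.Join(); }\n}\n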

"},{"location":"Components/Clock/ClockPublisher/#accessing-time-source","title":"Accessing Time Source","text":"

Running the clock publisher in a dedicated thread introduces the challenge of different threads accessing shared resources. In our case, the Main thread and the Clock thread compete for TimeSource resources. The diagram below illustrates this concurrent behaviour, with two distinct threads vying for access to the TimeSource:

Given multiple sensors, each with its own publishing frequency, alongside a clock running at 100Hz, there is a notable competition for TimeSource resources. In such cases, it becomes imperative for the TimeSource class to be thread-safe.

"},{"location":"Components/Clock/ClockPublisher/#thread-safe-time-source","title":"Thread-Safe Time Source","text":"

The TimeSource synchronization mechanism employs a mutex to lock the necessary resource for the current thread. The sequence of actions undertaken each time the GetTime() method is called involves:
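
As a hypothetical illustration of this locking pattern (the type and method names here are assumptions, not the actual AWSIM code), a thread-safe GetTime() could look like:

// Minimal sketch of a thread-safe time source - names are hypothetical.\npublic class ThreadSafeTimeSourceSketch\n{\n    private readonly object timeLock = new object();\n\n    public void GetTime(out int seconds, out uint nanoseconds)\n    {\n        // The lock guarantees that the Main thread and the Clock thread\n        // never read or update the time state concurrently.\n        lock (timeLock)\n        {\n            double elapsed = GetScaledElapsedSeconds();\n            seconds = (int)elapsed;\n            nanoseconds = (uint)((elapsed - seconds) * 1e9);\n        }\n    }\n\n    // Stub: e.g. (system time - start time) * simulation timescale.\n    private double GetScaledElapsedSeconds() { return 0.0; }\n}\n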

"},{"location":"Components/Clock/ClockPublisher/#extensions","title":"Extensions","text":"

There are two additional classes used to synchronise the UnityEngine TimeAsDouble and TimeScale values between threads:

"},{"location":"Components/Environment/AWSIMEnvironment/","title":"AWSIM Environment","text":""},{"location":"Components/Environment/AWSIMEnvironment/#awsim-environment","title":"AWSIM Environment","text":""},{"location":"Components/Environment/AWSIMEnvironment/#introduction","title":"Introduction","text":"

Environment is an object that contains all the elements visible in the scene, along with components that affect how they are rendered. It contains several objects that aggregate static environment objects by type. Moreover, it contains elements responsible for controlling random traffic.

Own Environment prefab

If you would like to develop your own prefab Environment for AWSIM, we encourage you to read this tutorial.

AutowareSimulation scene

If you would like to see how Environment with random traffic works or run some tests, we encourage you to familiarize yourself with the AutowareSimulation scene described in this section.

The Environment prefab is also used to create a point cloud (*.pcd file) needed to locate the EgoVehicle in the simulated AWSIM scene. The point cloud is created using the RGL plugin and then used in Autoware. We encourage you to familiarize yourself with an example scene of creating a point cloud - described here.

Create PointCloud (*.pcd file)

If you would like to learn how to create a point cloud in AWSIM using Environment prefab, we encourage you to read this tutorial.

"},{"location":"Components/Environment/AWSIMEnvironment/#architecture","title":"Architecture","text":"

The architecture of an Environment - with dependencies between components - is presented in the following diagram.

"},{"location":"Components/Environment/AWSIMEnvironment/#prefabs","title":"Prefabs","text":"

Prefabs can be found under the following path:

Name Description Path Nishishinjuku Only stationary visual elements, no traffic Assets/AWSIM/Prefabs/Environments/Nishishinjuku.prefab Nishishinjuku RandomTraffic Stationary visual elements along with random traffic Assets/AWSIM/Prefabs/Environments/Nishishinjuku RandomTraffic.prefab Nishishinjuku Traffic Stationary visual elements along with non-random traffic Assets/AWSIM/Prefabs/Environments/Nishishinjuku Traffic.prefab

Environment prefab

Due to the similarity of the above prefabs, this section focuses on the Nishishinjuku RandomTraffic prefab. The exact differences between Nishishinjuku RandomTraffic and Nishishinjuku Traffic will be described in the future.

Environment name

In order to standardize the documentation, the name Environment will be used in this section as the equivalent of the prefab named Nishishinjuku RandomTraffic.

Nishishinjuku RandomTraffic prefab has the following content:

As you can see it contains:

All of these objects are described below in this section.

"},{"location":"Components/Environment/AWSIMEnvironment/#visual-elements","title":"Visual elements","text":"

Nishishinjuku RandomTraffic prefab contains many visual elements which are described here.

"},{"location":"Components/Environment/AWSIMEnvironment/#link-in-the-default-scene","title":"Link in the default Scene","text":"

The Nishishinjuku RandomTraffic prefab is added to the Environment object, with a 90-degree rotation about the Oy axis between them. This rotation is needed because of the differences in coordinate alignment between the Nishishinjuku RandomTraffic prefab objects (which have been modeled as *.fbx files) and the specifics of the GridZone definition (more on this is described here).

The Environment object is added to AutowareSimulation, which is added directly to the main parent of the scene - there are no transformations between these objects.

"},{"location":"Components/Environment/AWSIMEnvironment/#components","title":"Components","text":"

Nishishinjuku RandomTraffic (Environment) prefab contains only one component:

"},{"location":"Components/Environment/AWSIMEnvironment/#layers","title":"Layers","text":"

In order to enable the movement of vehicles around the environment, additional layers have been added to the project: Ground and Vehicle.

All objects that act as ground for NPCVehicles and EgoVehicle to move on have been added to the Ground layer - they cannot pass through each other and should collide so that the physics engine can calculate their interactions.

For this purpose, NPCVehicles and EgoVehicle have been added to the Vehicle layer.

In the project physics settings, it is ensured that collisions between objects in the Vehicle layer are disabled (this applies to EgoVehicle and NPCVehicles - they do not collide with each other):

"},{"location":"Components/Environment/AWSIMEnvironment/#traffic-components","title":"Traffic Components","text":"

Because the RandomTrafficSimulator, TrafficIntersection, TrafficLane and StopLine objects are specific to traffic simulation, they are described in the separate section Traffic Components - where all the elements necessary for simulated random traffic are presented.

"},{"location":"Components/Environment/AWSIMEnvironment/#visual-elements-sjk","title":"Visual Elements (SJK)","text":"

The visual elements have been loaded and organized using the *.fbx files, which can be found under the path:

Assets/AWSIM/Externals/Nishishinjuku/Nishishinjuku_optimized/Models/*\n

Environment prefab contains several objects aggregating stationary visual elements of space by their category:

Scene Manager

For models (visual elements) added to the prefab to work properly with the LidarSensor using RGL, make sure that the SceneManager component is added to the scene - more about it is described in this section.

In the scene containing the Nishishinjuku RandomTraffic prefab, Scene Manager (script) is added as a component to the AutowareSimulation object containing the Environment.

"},{"location":"Components/Environment/AWSIMEnvironment/#trafficlights","title":"TrafficLights","text":"

TrafficLights are a stationary visual element belonging to the SJK01_P03 group. The lights are divided into two types: the classic TrafficLights used by vehicles at intersections, and the PedestrianLights found at crosswalks.

Classic traffic lights are aggregated at object TrafficLightA01_Root01_ALL_GP01

while lights used by pedestrians are aggregated at object TrafficLightB01_Root01_All_GP01.

TrafficLights and PedestrianLights are developed using models available in the form of *.fbx files, which can be found under the following path: Assets/AWSIM/Externals/Nishishinjuku/Nishishinjuku_optimized/Models/TrafficLights/Models/*

"},{"location":"Components/Environment/AWSIMEnvironment/#classic-trafficlights","title":"Classic TrafficLights","text":"

Apart from their housing, TrafficLights always contain 3 signaling light sources of different colors - from left to right: green, yellow, red. Optionally, they can have additional sources signaling the ability to drive in a specific direction, in the form of one or three signaling arrows.

In the environment there are many classic lights with different signaling configurations. However, each contains:

"},{"location":"Components/Environment/AWSIMEnvironment/#materials","title":"Materials","text":"

An important element configured in the TrafficLights object is the materials in the Mesh Renderer component. The material with index 0 always applies to the housing of the lights. Subsequent elements 1-6 correspond to successive slots of light sources (round luminous objects) - starting from the upper left corner of the object, moving right, then down to the bottom and back to the left corner. These indexes are used in the Traffic Light (script) - described here.

Materials for lighting slots that are assigned in Mesh Renderer can be found in the following path: Assets/AWSIM/Externals/Nishishinjuku/Nishishinjuku_optimized/Models/TrafficLights/Materials/*

"},{"location":"Components/Environment/AWSIMEnvironment/#pedestrianlights","title":"PedestrianLights","text":"

Apart from their housing, PedestrianLights always contain 2 signaling light sources of different colors - red on top and green on the bottom.

In the environment there are many pedestrian lights - they have the same components as classic TrafficLights, but the main difference is the configuration of their materials.

"},{"location":"Components/Environment/AWSIMEnvironment/#materials_1","title":"Materials","text":"

An important element configured in the PedestrianLights object is the materials in the Mesh Renderer component. The material with index 0 always applies to the housing of the lights. Subsequent elements 1-2 correspond to successive slots of light sources (round luminous objects) - starting from top to bottom. These indexes are used in the Traffic Light (script) - described here.

Materials for lighting slots that are assigned in Mesh Renderer can be found in the following path: Assets/AWSIM/Externals/Nishishinjuku/Nishishinjuku_optimized/Models/TrafficLights/Materials/*

"},{"location":"Components/Environment/AWSIMEnvironment/#volume","title":"Volume","text":"

Volume is a GameObject with a Volume component, which is used in the High Definition Render Pipeline (HDRP). It defines a set of scene settings and properties. It can be either global, affecting the entire scene, or local, influencing specific areas within the scene. Volumes are used to interpolate between different property values based on the Camera's position, allowing for dynamic changes to environment settings such as fog color, density, and other visual effects.

In the case of the Nishishinjuku RandomTraffic prefab, the volume works in global mode and has a Volume profile loaded. This volume profile overrides the default properties of Volume related to the following components: Fog, Shadows, Ambient Occlusion, Visual Environment, HDRI Sky. It can be found in the following path: Assets/AWSIM/Prefabs/Environments/Nishishinjuku/Volume Profile.asset

"},{"location":"Components/Environment/AWSIMEnvironment/#directional-light","title":"Directional Light","text":"

Directional Light is a GameObject with a Light component, which is used in the High Definition Render Pipeline (HDRP). It controls the shape, color, and intensity of the light. It also controls whether or not the light casts shadows in the scene, as well as more advanced settings.

In the case of the Nishishinjuku RandomTraffic prefab, a Directional type light is added. It creates effects similar to sunlight in the scene. This light illuminates all GameObjects in the scene as if the light rays were parallel and always coming from the same direction. A directional light disregards the distance between the Light itself and the target, so the light does not diminish with distance. The strength of the Light (Intensity) is set to 73123.09 Lux. In addition, a Shadow Map with a resolution of 4096 is enabled, which is updated in Every Frame of the simulation. The transform of the Directional Light object is set in such a way that it shines on the environment almost vertically from above.

"},{"location":"Components/Environment/AWSIMEnvironment/#npcpedestrians","title":"NPCPedestrians","text":"

NPCPedestrians is an aggregating object for NPCPedestrian objects placed in the environment. The Nishishinjuku RandomTraffic prefab has 7 NPCPedestrian (humanElegant) prefabs defined in selected places. You can read more about the NPCPedestrian prefab in this section.

"},{"location":"Components/Environment/AWSIMEnvironment/#environment-script","title":"Environment (script)","text":"

Environment (script) contains the information about how a simulated Environment is positioned in the real world - that is, it describes the real-world position of the simulated Environment.

AWSIM uses part of the Military Grid Reference System (MGRS). To understand this topic, you only need to know that, using MGRS, you can specify distinct parts of the globe with different accuracy. For AWSIM the chosen accuracy is a 100x100 km square. Such a square is identified with a unique code like 54SUE (for more information on Grid Zones please see this page).

Inside this Grid Zone the exact location is specified with the offset calculated from the bottom-left corner of the Grid Zone. You can interpret the Grid Zone as a local coordinate system in which you position the Environment.

In the Nishishinjuku RandomTraffic prefab, the simulated Environment is positioned in the Grid Zone 54SUE. The offset is equal to 81655.73 meters in the Ox axis, 50137.43 meters in the Oy axis and 42.49998 meters in the Oz axis. In addition to this shift, it is also necessary to rotate the Environment in the scene by 90 degrees about the Oy axis - this is ensured by the transform in the prefab object.

This means that the 3D models were created in reference to this exact point and because of that the 3D models of Environment align perfectly with the data from Lanelet2.

The essence of Environment (script)

The Environment (script) configuration is necessary at the moment of loading data from Lanelet2.

Internally it shifts the elements from Lanelet2 by the given offset so that they align with the Environment that is located at the local origin with no offset.
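
As a worked illustration - assuming the shift is a plain subtraction of the script's offset, and ignoring the Unity axis convention and the 90-degree rotation mentioned above - a Lanelet2 node at local position (81596.1357, 50194.0803, 34.137) would land close to the scene origin:

// Hypothetical illustration only (plain subtraction, axis mapping ignored):\n// node local position (from Lanelet2):       (81596.1357, 50194.0803, 34.137)\n// Environment MGRS offset (from the script): (81655.73,   50137.43,   42.49998)\n// shifted position used on the scene:        (-59.5943,   56.6503,    -8.36298)\n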

"},{"location":"Components/Environment/AddNewEnvironment/AddEnvironment/","title":"Add Environment","text":""},{"location":"Components/Environment/AddNewEnvironment/AddEnvironment/#add-an-environment","title":"Add an Environment","text":"

Environment is an important part of a Scene in AWSIM. Every aspect of the simulated surrounding world needs to be included in the Environment prefab - in this section you will learn how to develop it. However, first Lanelet2 needs to be developed along with 3D models of the world, which will be the main elements of this prefab.

Tip

If you want to learn more about the Environment at AWSIM, please visit this page.

"},{"location":"Components/Environment/AddNewEnvironment/AddEnvironment/#create-a-lanelet2","title":"Create a Lanelet2","text":"

Before you start creating Lanelet2, we encourage you to read the documentation to find out what Lanelet2 is all about. Lanelet2 can be created using VectorMapBuilder (VMB) based on a PCD obtained from a real-life LiDAR sensor.

When working with VMB, it is necessary to ensure the most accurate mapping of the road situation using the available elements. Especially important are TrafficLanes, created in VMB as connected Road Nodes, and StopLines, created in VMB as Road Surface Stoplines.

Lanelet2 positioning

Lanelet2 should be created in MGRS coordinates of the real place you are recreating. Please position your Lanelet2 relative to the origin (bottom left corner) of the MGRS Grid Zone with the 100 km Square ID in which the location lies. More details can be read here.

You can think of the Grid Zone as a local coordinate system. Instead of making the global (0,0) point (the crossing of the Equator and the Prime Meridian) our coordinate system origin, we take a closer one. The MGRS Grid Zone with 100 km Square ID code designates a 100x100 km square on the map, and we take its bottom left corner as our local origin.

Example

Let's examine one node from an example Lanelet2 map:

<node id=\"4\" lat=\"35.68855194431519\" lon=\"139.69142711058254\">\n    <tag k=\"mgrs_code\" v=\"54SUE815501\"/>\n    <tag k=\"local_x\" v=\"81596.1357\"/>\n    <tag k=\"local_y\" v=\"50194.0803\"/>\n    <tag k=\"ele\" v=\"34.137\"/>\n</node>\n

The position of the node with id=\"4\" is described by the absolute coordinates given in the <node> tag. In this example the coordinates are lat=\"35.68855194431519\" and lon=\"139.69142711058254\".

It is also described as a local transformation, defined as a translation relative to the origin of the MGRS Grid Zone with 100 km Square ID (bottom left corner). The MGRS Grid Zone designation with 100 km Square ID in this case is equal to 54SUE. In this example the offset in the X axis is k=\"local_x\" v=\"81596.1357\" and the offset in the Y axis is k=\"local_y\" v=\"50194.0803\".

Note that elevation information is also included.

"},{"location":"Components/Environment/AddNewEnvironment/AddEnvironment/#create-3d-models","title":"Create 3D models","text":"

You can create 3D models of an Environment as you wish. It is advised, however, to prepare the models in the form of .fbx files. Additionally, you should include materials and textures in separate directories. Many models are delivered in this format. This file format allows you to import models into Unity with materials and to replace materials while importing. You can learn more about it here.

You can see a .fbx model added and modified on the fly in the example of this section.

"},{"location":"Components/Environment/AddNewEnvironment/AddEnvironment/#guidelines","title":"Guidelines","text":"

To improve the simulation performance of a scene containing your Environment prefab, please keep in mind some of these tips when creating 3D models:

  1. Prefer many smaller models over a few big ones.

    In general it is beneficial for performance to make one small mesh of an object like a tree and reuse it on the scene by placing many prefabs, instead of making one giant mesh containing all the trees on the given scene. It is beneficial even in situations when you are not reusing the meshes. Let's say you have a city with many buildings - and every one of those buildings is different - it is still advised to model those buildings individually and make them separate GameObjects.

  2. Choose texture resolution appropriately.

    Always keep in mind the target usage of your texture. Avoid making a high resolution texture for a small object or one that will always be far away from the camera. This way you can save some computing power by not calculating details that will not be seen at the given screen resolution.

    !!! tip \"Practical advice\" You can follow these simple rules when deciding on texture quality (texel density):

      - For general objects choose 512px/m (so the minimum size of texture is 512/512)\n  - For important objects that are close to the camera choose 1024px/m (so the minimum size of texture is 1024/1024)\n
  3. (optional) Add animation.

    Add animations to the correct objects. If some elements in the 3D model are interactive, they should be divided into separate parts.

What's more, consider these tips related directly to the use of 3D models in AWSIM:

  1. Creating a 3D model based on actual point cloud data makes it more realistic.
  2. AWSIM is created using HDRP (High Definition Render Pipeline), which performs better when object meshes are merged.
  3. Occlusion culling and flutter culling cannot be used because the sensors' detection targets would disappear.
  4. Each traffic light should have a separate GameObject. Also, each light in the traffic light should be split into separate materials.
"},{"location":"Components/Environment/AddNewEnvironment/AddEnvironment/#create-an-environment-prefab","title":"Create an Environment prefab","text":"

In this part, you will learn how to create an Environment prefab - that is, develop a GameObject containing all the necessary elements and save it as a prefab.

"},{"location":"Components/Environment/AddNewEnvironment/AddEnvironment/#1-add-3d-models","title":"1. Add 3D models","text":"

In this section we will add roads, buildings, greenery, signs, road markings etc. to our scene.

Most often your models will be saved in the .fbx format. If so, you can customize the materials in the model while importing it. Sometimes this is necessary, as models come with placeholder materials. You can either

In order to add 3D models from the .fbx file to the Scene please do the following steps:

  1. In the Project view navigate to the directory where the model is located and click on the model file.
  2. Now you can customize the materials used in the model in the Inspector view.
  3. Drag the model into the Scene where you want to position it.
  4. Move the Object in the Hierarchy tree appropriately.
  5. (optional) Now you can save this model configuration as a prefab to easily reuse it. Do this by dragging the Object from the Scene into the Project view. When you get a warning, make sure to select that you want to create a new original prefab.

Example

An example video of the full process of importing a model, changing the materials, saving the new model as a prefab and importing the new prefab.

When creating a complex Environment with many elements, you should group them appropriately in the Hierarchy view. This depends on your individual style, but it is good practice to add all repeating elements into one common Object, e.g. all identical traffic lights grouped in a TrafficLights Object. The same can be done with trees, buildings, signs, etc. You can group Objects as you like.

Object hierarchy

When adding elements to the Environment that are part of the static world (like 3D models of buildings, traffic lights etc.) it is good practice to collect them in one parent GameObject called Map or something similar.

By doing this you can set a transformation of the parent GameObject Map to adjust the world pose in reference to e.g. loaded objects from Lanelet2.

Remember to unpack

Please remember to unpack all Objects added into the scene. If you don't, they will change materials together with the .fbx model file, as demonstrated in the example below.

This is unwanted behavior. When you import a model and change some materials, but leave the rest default and don't unpack the model, then your instances of this model on the scene may change when you change the original fbx model settings.

See the example below to visualize the problem.

Example

In this example we will

Watch what happens: the instance on the Scene changes its materials together with the model. This only happens if you don't unpack the model.

Example Environment after adding 3D models

After completing this step you should have an Environment Object that looks similar to the one presented below.

The Environment with 3D models can look similar to the one presented below.

"},{"location":"Components/Environment/AddNewEnvironment/AddEnvironment/#2-add-an-environment-script","title":"2. Add an Environment Script","text":"

Add an Environment Script as a component in the Environment object (see the last example in the previous section). It does not change the appearance of the Environment, but it is necessary for the simulation to work correctly.

  1. Click on the Add Component button in the Environment object.

  2. Search for Environment and select it.

  3. Set the MGRS to the offset of your Environment as explained in this section.

Info

Due to the differences between VectorMapBuilder and Unity, it may be necessary to set the transform of the Environment object. The transform in Environment should be set in such a way that the TrafficLanes match the modeled roads. Most often it is necessary to set a positive 90-degree rotation about the Y axis.

This step should be done after importing items from lanelet2. Only then will you know whether the Environment is misaligned with the items from lanelet2.

"},{"location":"Components/Environment/AddNewEnvironment/AddEnvironment/#3-add-a-directional-light","title":"3. Add a Directional Light","text":"
  1. Create a new child Object of the Environment and name it Directional Light.

  2. Click the Add Component button, search for Light and select it.

  3. Change light Type to Directional.

  4. Now you can configure the directional light as you wish. E.g. change the intensity or orientation.

Tip

For more details on lighting check out the official Unity documentation.

Example Environment after adding Directional Light

"},{"location":"Components/Environment/AddNewEnvironment/AddEnvironment/#4-add-a-volume","title":"4. Add a Volume","text":"
  1. Create a new child object of the Environment and name it Volume.

  2. Click Add Component, search for Volume and select it.

  3. Change the Profile to Volume Profile and wait for changes to take effect.

  4. Now you can configure the Volume individually as you wish.

Tip

For more details on volumes check out the official Unity documentation.

Example Environment after adding Volume

"},{"location":"Components/Environment/AddNewEnvironment/AddEnvironment/#5-add-npcpedestrians","title":"5. Add NPCPedestrians","text":"
  1. Make an NPCPedestrians parent object.

  2. Open Assets/AWSIM/Prefabs/NPCs/Pedestrians in Project view and drag a humanElegant into the NPCPedestrians parent object.

  3. Click Add Component in the humanElegant object and search for Simple Pedestrian Walker Controller Script and select it.

    This is a simple Script that makes the pedestrian walk straight and turn around indefinitely. You can configure the pedestrian behavior with 2 parameters (a rough sketch of such a controller is shown after this list).

    - Duration - how long the pedestrian will walk straight - Speed - how fast the pedestrian will walk

    !!!tip The Simple Pedestrian Walker Controller Script is best suited to be used on pavements.

  4. Finally position the NPCPedestrian on the scene where you want it to start walking.

    !!! warning Remember to set correct orientation, as the NPCPedestrian will walk straight from the starting position with the starting orientation.
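
Below is a rough, hypothetical sketch of such a controller - an illustration of the walk-straight-then-turn-around behavior only, not the actual Simple Pedestrian Walker Controller Script.

using UnityEngine;\n\n// Hypothetical sketch of a walk-straight-then-turn-around controller;\n// not the actual Simple Pedestrian Walker Controller Script.\npublic class PedestrianWalkerSketch : MonoBehaviour\n{\n    public float Duration = 10f; // seconds to walk straight before turning\n    public float Speed = 1.2f;   // walking speed in m/s\n\n    private float elapsed;\n\n    private void Update()\n    {\n        // Walk straight ahead at the configured speed.\n        transform.position += transform.forward * Speed * Time.deltaTime;\n        elapsed += Time.deltaTime;\n        if (elapsed >= Duration)\n        {\n            transform.Rotate(0f, 180f, 0f); // turn around and walk back\n            elapsed = 0f;\n        }\n    }\n}\n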

Example Environment after adding NPC Pedestrians

"},{"location":"Components/Environment/AddNewEnvironment/AddEnvironment/#6-save-an-environment-prefab","title":"6. Save an Environment prefab","text":"

After completing all the previous steps and finishing your Environment, you can save it in the prefab format.

  1. Find an Environments directory in the Project view (Assets/AWSIM/Prefabs/Environments).
  2. Drag the Environment Object into the Project view.
  3. (optional) Change the prefab name to recognize it easily later.

Success

Once you've added the Environment, you need to add and configure TrafficLights. For details please visit this tutorial.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/AddRandomTraffic/","title":"Add Random Traffic","text":"

To add Random Traffic to your scene you need the Random Traffic Simulator Script.

  1. Create a new Game Object as a child of Environment and call it RandomTrafficSimulator.

  2. Click the Add Component button in the Inspector to add a script.

  3. A small window should pop up. Search for the RandomTrafficSimulator script and add it by double-clicking it or by pressing enter.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/AddRandomTraffic/#basic-configuration","title":"Basic Configuration","text":"

After clicking on the newly created RandomTrafficSimulator object in the Scene tree you should see something like this in the Inspector view.

Random Traffic Simulator, as the name suggests, generates traffic based on random numbers. To reproduce situations, you can set a specific seed.

You can also set the Vehicle Layer Mask and Ground Layer Mask. It is important to set these layers correctly, as they are the basis for vehicle physics. If set incorrectly, the vehicles may fall through the ground into infinity.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/AddRandomTraffic/#add-npcvehicles","title":"Add NPCVehicles","text":"

The Random Traffic Simulator Script moves, spawns and despawns vehicles based on the configuration. These settings can be adjusted to your preference.

  1. Setting Max Vehicle Count.

    This parameter sets a limit on how many vehicles can be added to the scene at one time.

  2. NPC Prefabs

    These are models of vehicles that should be spawned on the scene. To add NPC Prefabs, please follow these steps:

    1. Click on the \"+\" sign and, in the new list element at the bottom, click on the small icon on the right to select a prefab.


    2. Change to the Assets tab in the small window that popped up.


    3. Search for the Vehicle prefab you want to add, e.g. Hatchback.


    Available NPC prefabs are shown in the NPC Vehicle section.

    !!! tip \"Control NPC Vehicle spawning\" Random Traffic Simulator Script will on random select one prefab from Npc Prefabs list every time when there are not enough vehicles on the scene (the number of vehicles on the scene is smaller than the number specified in the Max Vehicle Count field).

      You can control the odds of selecting one vehicle prefab over another by adding more than one instance of the same prefab to this list.\n
"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/AddRandomTraffic/#add-spawnable-lanes","title":"Add spawnable lanes","text":"

Spawnable lanes are the lanes on which new vehicles can be spawned by the Random Traffic Simulator Script. Best practice is to use the beginnings of lanes at the edges of the map as spawnable lanes.

Warning

Make sure you have a lanelet added into your scene. The full tutorial on this topic can be found here.

Adding spawnable lanes is similar to Adding NPC Prefabs.

  1. Add an element to the Spawnable Lanes list by clicking on the \"+\" symbol or by setting the number of lanes directly.

  2. Now you can click on the small icon on the right of the list element and select a Traffic Lane you are interested in.

    Unfortunately, all Traffic Lanes have the same names, so it can be difficult to know which one to use. Alternatively, you can add a traffic lane by visually selecting it in the editor:

    - Lock RandomTrafficSimulator in the Inspector view.

      <img src=\"add_traffic_lane3.gif\" alt=\"Lock inspector view gif\" width=\"500\"/>\n

    - Select the Traffic Lane you are interested in on the Scene and as it gets highlighted in the Hierarchy view you can now drag and drop this Traffic Lane into the appropriate list element.

      ![Select traffic lane and drag it to the list gif](add_traffic_lane4.gif)\n
"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/AddRandomTraffic/#vehicles-configuration","title":"Vehicles configuration","text":"

The last thing to configure is the behavior of NPCVehicles. You can specify the acceleration rate of the vehicles and three values of deceleration.

Question

This configuration is common for all vehicles managed by the Random Traffic Simulator Script.

Success

The last thing that needs to be done for RandomTraffic to work properly is to add intersections with traffic lights and configure their sequences. Details here.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/AddTrafficIntersection/","title":"Add Traffic Intersection","text":"

Every TrafficIntersection on the scene needs to be added as a GameObject. Best practice is to create a parent object TrafficIntersections and add all instances of TrafficIntersection as its children. You can do this the same way as with the Random Traffic Simulator.

Traffic Lights configuration

Before performing this step, check all TrafficLights for correct configuration and make sure that TrafficLights have added scripts. If you want to learn how to add and configure it check out this tutorial.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/AddTrafficIntersection/#add-a-box-collider","title":"Add a Box Collider","text":"
  1. TrafficIntersection needs to be marked with a box collider. First click on the Add Component button.

  2. In the window that popped up search for Box Collider and select it.

  3. Then set the position, orientation and size of the Box Collider. You can do this by manipulating the Box Collider properties Center and Size in the Inspector view.

    !!! info \"Traffic Intersection Box Collider guidelines\" When adding a Box Collider marking your Traffic Intersection please make sure that

      - It is **not** floating over the ground - there is no gap between the Box Collider and The Traffic Intersection\n  - It is high enough to cover all Vehicles that will be passing through the Intersection\n  - It accurately represents the shape, position and orientation of the Traffic Intersection\n
"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/AddTrafficIntersection/#add-a-traffic-intersection-script","title":"Add a Traffic Intersection Script","text":"
  1. Click on the Add Component button.

  2. In the window that popped up search for Traffic Intersection and select it.

  3. You need to set a proper Collider Mask in order for the script to work.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/AddTrafficIntersection/#create-traffic-light-groups","title":"Create traffic light groups","text":"

Traffic Light Groups are groups of traffic lights that are synchronized, meaning they light up with the same color and pattern at all times.

Traffic lights are divided into groups to simplify the process of creating a lighting sequence. By default you will see 4 Traffic Light Groups; you can add and remove them to suit your needs.

  1. First, from the drop-down menu called Group, choose the Traffic Light Group name you want to assign to your Traffic Light Group.

  2. Then add as many Traffic Lights as you want your group to have. From the drop-down menu select the Traffic Lights you want to add.

    !!! tip \"Select Traffic Lights visually\" If you have a lot of Traffic Lights it can be challenging to add them from the list. You can select them visually from the Scene the same as you had selected Traffic Lanes in the Random Traffic Simulator.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/AddTrafficIntersection/#create-lighting-sequences","title":"Create lighting sequences","text":"

Lighting Sequences is a list of commands based on which the Traffic Lights on an intersection operate. The elements in the Lighting Sequences list are changes (commands) that will be executed on the Traffic Light Groups.

Group Lighting Order should be interpreted as a command (or order) given to all Traffic Lights in the selected Traffic Light Group. In Group Lighting Orders you can set a different traffic light status for every Traffic Light Group (in separate elements). The Lighting Sequences list is processed in an infinite loop.

It should be noted that changes applied to one Traffic Light Group will remain in effect until the next Group Lighting Order is given to this Traffic Light Group. This means that if in one Group Lighting Order no command is sent to a Traffic Light Group, then this Group will retain its current lighting pattern (color, bulb and status).

For every Lighting Sequences Element you have to specify the following (a hypothetical sketch of the resulting structure is shown after this list)

  1. Interval Sec

    This is the time for which the sequence waits before executing the next order, i.e. how long this state will be active.

  2. For every element in Group Lighting Orders the following needs to be specified

    1. Group to which this order will be applied 2. List of orders (Bulb Data)

      In other words - what bulbs should be turned on, their color and pattern.\n\n  - Type - What type of bulb should be turned on\n  - Color - What color this bulb should have (in most cases this will be the same as color of the bulb if specified)\n  - Status - How the bulb should light up (selecting `SOLID_OFF` is necessary only when you want to turn the Traffic Light completely off, meaning **no** bulb will light up)\n\n  !!!note\n      When applying the change to a Traffic Light\n\n      - First all bulbs are turned off\n      - Only after that changes made in the order are applied\n\n      This means it is only necessary to supply the data about what bulbs should be turned on.\n      E.g. you don't have to turn off a red bulb when turning on the green one.\n
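
To make this structure easier to picture, here is a hypothetical data model of a Lighting Sequences Element - the type and field names are illustrative assumptions, not the actual AWSIM types:

// Hypothetical data model - names are illustrative, not the actual AWSIM types.\npublic enum BulbType { GREEN_BULB, YELLOW_BULB, RED_BULB /* , arrow bulbs ... */ }\npublic enum BulbColor { GREEN, YELLOW, RED }\npublic enum BulbStatus { SOLID_ON, SOLID_OFF, FLASHING }\n\n[System.Serializable]\npublic class BulbData\n{\n    public BulbType Type;     // which bulb to turn on\n    public BulbColor Color;   // the color it should light up with\n    public BulbStatus Status; // solid, flashing, or completely off\n}\n\n[System.Serializable]\npublic class GroupLightingOrder\n{\n    public string Group;      // the Traffic Light Group this order applies to\n    public BulbData[] Orders; // bulbs to turn on; all others are turned off first\n}\n\n[System.Serializable]\npublic class LightingSequenceElement\n{\n    public float IntervalSec;           // how long this state stays active\n    public GroupLightingOrder[] Orders; // groups not listed keep their state\n}\n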

Warning

The first Element in the Lighting Sequences (in most cases) should contain bulb data for every Traffic Light Group. Traffic Light Groups not specified in the first Element will not light up at the beginning of the scene.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/AddTrafficIntersection/#example","title":"Example","text":"

Let's consider the following lighting sequence element.

In the Lighting Sequence Element 5 we tell all Traffic Lights in the Vehicle Traffic Light Group 2 to light up their Green Bulb with the color Green and status Solid On which means that they will be turned on all the time. We also implicitly tell them to turn all other Bulbs off.

At the same time we tell all Traffic Lights in the Pedestrian Traffic Light Group 2 to do the very same thing.

This state will be active for the next 15 seconds, after which the Traffic Intersection will move to the next Element in the Sequence.

Now let's consider the following Lighting Sequences Element 6.

Here we order the Traffic Lights in the Pedestrian Traffic Light Group 2 to light up their Green Bulb with the color Green and status Flashing. We also implicitly tell them to turn all other bulbs off, which were already off from the implicit change in Element 5, so this effectively does nothing.

Note that Lighting Sequences Element 6 has no orders for Vehicle Traffic Light Group 2. This means that Traffic Lights in the Vehicle Traffic Light Group 2 will hold on to their earlier orders.

This state will be active for 5 seconds, which means that the Traffic Lights in the Vehicle Traffic Light Group 2 will be lighting solid green for a total of 20 seconds.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/AddTrafficIntersection/#how-to-test","title":"How to test","text":"

To test how your Traffic Intersection behaves simply run the Scene as shown here (but don't launch Autoware). To take a better look at the Traffic Lights you can change to the Scene view by pressing ctrl + 1 - now you can move the camera freely (to go back to the Game view simply press ctrl + 2).

As the time passes you can examine whether your Traffic Intersection is configured correctly.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/","title":"Load Items From Lanelet","text":""},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#load-items-from-lanelet2","title":"Load items from lanelet2","text":"

To add RandomTraffic to the Environment, it is necessary to load elements from the lanelet2. As a result of loading, TrafficLanes and StopLines will be added to the scene. Details of these components can be found here.

Warning

Before following this tutorial make sure you have added an Environment Script and set a proper MGRS offset position. This position is used when loading elements from the lanelet2!

  1. Click on the AWSIM button in the top menu of the Unity editor and navigate to AWSIM -> Random Traffic -> Load Lanelet.

  2. In the window that pops up, select your osm file, change some Waypoint Settings to suit your needs and click Load.

    !!! info \"Waypoint Settings explanation\" - Resolution - resolution of resampling. Lower values provide better accuracy at the cost of processing time - Min Delta Length - minimum length(m) between adjacent points - Min Delta Angle - minimum angle(deg) between adjacent edges. Lowering this value produces a smoother curve

  3. Traffic Lanes and Stop Lines should appear in the Hierarchy view. If they appear somewhere else in your Hierarchy tree, move them into the Environment object.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#complete-loaded-trafficlanes","title":"Complete loaded TrafficLanes","text":"

The Traffic Lanes that were loaded should be configured according to the road situation. The aspects you can configure

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#how-to-test","title":"How to test","text":"

If you want to test your Traffic Lanes, you have to try running Random Traffic. To verify one particular Traffic Lane or Traffic Lane connection, you can make a new spawnable lane next to the Traffic Lane you want to test. This way you can be sure NPCVehicles will start driving on the Traffic Lane you are interested in right from the beginning.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#add-a-stopline-manually","title":"Add a StopLine manually","text":"

If something goes wrong when loading data from lanelet2, or you just want to add another StopLine manually, please do the following

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#1-add-a-gameobject","title":"1. Add a GameObject","text":"

Add a new GameObject StopLine in the StopLines parent object.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#2-add-a-stopline-script","title":"2. Add a StopLine Script","text":"

Add a StopLine Script by clicking 'Add Component' and searching for Stop Line.

Example

So far your Stop Line should look like the following

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#3-set-points","title":"3. Set points","text":"

Set the position of points Element 0 and Element 1. These Elements are the two end points of a Stop Line. The Stop Line will span between these points.

You don't need to set any data in the 'Transform' section as it is not used anyway.

StopLine coordinate system

Please note that the Stop Line Script operates in the global coordinate system. The transformations of the StopLine Object and its parent Objects won't affect the Stop Line.

Example

In this example you can see that the Position of the Game Object does not affect the position and orientation of the Stop Line.

For a Game Object in the center of the coordinate system.

The Stop Line is in the specified position.

However, with the Game Object shifted in the X axis.

The Stop Line stays in the same position as before, not affected by any transformations happening to the Game Object.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#4-has-stop-sign","title":"4. Has Stop Sign","text":"

Select whether there is a Stop Sign.

Select the Has Stop Sign tick-box confirming that this Stop Line has a Stop Sign. The Stop Sign can be either vertical or horizontal.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#5-select-a-traffic-light","title":"5. Select a Traffic Light","text":"

Select from the drop-down menu the Traffic Light that is on the Traffic Intersection and is facing the vehicle that would be driving on the Traffic Lane connected with the Stop Line you are configuring.

In other words select the right Traffic Light for the Lane on which your Stop Line is placed.

Select Traffic Lights visually

If you have a lot of Traffic Lights it can be challenging to add them from the list. You can select them visually from the Scene, the same way you selected Traffic Lanes in the Random Traffic Simulator.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#6-configure-the-traffic-lane","title":"6. Configure the Traffic Lane","text":"

Every Stop Line has to be connected to a Traffic Lane. This is done in the Traffic Lane configuration. For this reason please check the Traffic Lane section for more details.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#add-a-trafficlane-manually","title":"Add a TrafficLane manually","text":"

It is possible that something may go wrong when reading a lanelet2 and you need to add an additional Traffic Lane, or you may just want to add one. To add a Traffic Lane manually, please follow the steps below.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#1-add-a-gameobject_1","title":"1. Add a GameObject","text":"

Add a new Game Object called TrafficLane into the TrafficLanes parent Object.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#2-add-a-traffic-lane-script","title":"2. Add a Traffic Lane Script","text":"

Click the 'Add Component' button, search for the Traffic Lane script and select it.

Example

So far your Traffic Lane should look like the following.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#3-configure-waypoints","title":"3. Configure Waypoints","text":"

Now we will configure the 'Waypoints' list. This list is an ordered list of points defining the Traffic Lane. When you want to add a waypoint to a Traffic Lane, just click on the + button or specify the number of waypoints directly in the number field to the right of the 'Waypoints' identifier.

The order of elements on this list determines how waypoints are connected.

Traffic Lane coordinate system

Please note that the Traffic Lane waypoints are located in the global coordinate system; any transformations set on the Game Object or its parent Objects will be ignored.

This behavior is the same as with the Stop Line. You can see the example provided in the Stop Line tutorial.

General advice

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#4-select-the-turn-direction","title":"4. Select the Turn Direction","text":"

You also need to select the Turn Direction. This field describes what the vehicles traveling on this Traffic Lane do in reference to other Traffic Lanes. You need to select whether the vehicles are

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#5-configure-next-lanes","title":"5. Configure Next Lanes","text":"

You need to add all Traffic Lanes that begin at the end of this Traffic Lane into the Next Lanes list. In other words, these are the lanes the vehicle can choose from when deciding where it wants to drive (e.g. driving straight or turning left with a choice of two different Traffic Lanes).

To do this click the + sign in the Next Lanes list and in the element that appeared select the correct Traffic Lane.

Next Lane example

Let's consider the following Traffic Intersection.

In this example we will consider the Traffic Lane driving from the bottom of the screen and turning right. After finishing driving on this Traffic Lane, the vehicle has a choice of 4 different Traffic Lanes, each turning into a different lane on the parallel road.

All 4 Traffic Lanes are connected to the considered Traffic Lane. This situation is reflected in the Traffic Lane configuration shown below.

Select Traffic Lanes visually

If you have a lot of Traffic Lanes it can be challenging to add them from the list. You can select them visually from the Scene, the same way you selected Traffic Lanes in the Random Traffic Simulator.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#6-configure-previous-lanes","title":"6. Configure Previous Lanes","text":"

A Traffic Lane also has to have its previous Traffic Lanes configured. This is done in exactly the same way as configuring next lanes, shown in the previous step. Please do the same, but add the Traffic Lanes that come before the configured one - instead of the ones after it - into the Prev Lanes list.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#7-configure-right-of-way","title":"7. Configure Right Of Way","text":"

Now we will configure the Right Of Way Lanes. The Right Of Way Lanes is a list of Traffic Lanes that have priority over the configured one. The process of adding Right Of Way Lanes is the same as adding Next Lanes. For this reason, please see the aforementioned step for a detailed description of how to do this (the only difference is that you add the Traffic Lanes to the Right Of Way Lanes list).

Right of way example

In this example let's consider the Traffic Lane highlighted in blue from the Traffic Intersection below.

This Traffic Lane has to give way to all Traffic Lanes highlighted in yellow. This means all of the yellow Traffic Lanes have to be added to the 'Right Of Way Lanes' list, which is reflected in the configuration shown below.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#8-add-stop-line","title":"8. Add Stop Line","text":"

Adding a Stop Line is necessary only when a Stop Line is present at the end of the configured Traffic Lane. If so, please select the correct Stop Line from the drop-down list.

Select Stop Line visually

If you have a lot of Stop Lines it can be challenging to add them from the list. You can select them visually from the Scene, the same way you selected Traffic Lanes in the Random Traffic Simulator.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#9-add-speed-limit","title":"9. Add Speed Limit.","text":"

In the field called Speed Limit simply write the speed limit that is in effect on the configured Traffic Lane.

"},{"location":"Components/Environment/AddNewEnvironment/AddRandomTraffic/LoadItemsFromLanelet/#10-set-the-right-of-ways","title":"10. Set the Right of Ways","text":"

To make the Right Of Way Lanes list you configured earlier take effect, simply click the 'Set RightOfWays' button. A hypothetical sketch summarizing the whole Traffic Lane configuration is shown below.
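
A hypothetical outline of the whole configuration, mirroring steps 3-9 above - the field and type names are assumptions for illustration, not the actual Traffic Lane script:

using UnityEngine;\n\n// Hypothetical outline - not the actual Traffic Lane script.\npublic class TrafficLaneSketch : MonoBehaviour\n{\n    public enum TurnDirectionType { STRAIGHT, LEFT, RIGHT } // see step 4\n\n    public Vector3[] Waypoints;                 // ordered points in *global* coordinates (step 3)\n    public TurnDirectionType TurnDirection;     // step 4\n    public TrafficLaneSketch[] NextLanes;       // lanes starting where this one ends (step 5)\n    public TrafficLaneSketch[] PrevLanes;       // lanes ending where this one starts (step 6)\n    public TrafficLaneSketch[] RightOfWayLanes; // lanes with priority over this one (step 7)\n    public GameObject StopLine;                 // stop line at the end, if any (step 8)\n    public float SpeedLimit;                    // speed limit in effect on this lane (step 9)\n}\n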

"},{"location":"Components/Environment/AddNewEnvironment/AddTrafficLights/","title":"Add TrafficLights","text":""},{"location":"Components/Environment/AddNewEnvironment/AddTrafficLights/#add-trafficlights","title":"Add TrafficLights","text":"

To add TrafficLights into your Environment follow the steps below.

Tip

In the Environment you are creating, there will most likely be many TrafficLights that should look and work the same way. To simplify the process of creating an environment, it is advised to create one TrafficLight of each type following this tutorial and then save them as prefabs that you will be able to reuse.

"},{"location":"Components/Environment/AddNewEnvironment/AddTrafficLights/#1-add-trafficlight-object","title":"1. Add TrafficLight Object","text":"

Into your Map object in the Hierarchy view add a new Child Object and name it appropriately.

"},{"location":"Components/Environment/AddNewEnvironment/AddTrafficLights/#2-add-a-mesh-filter-and-select-meshes","title":"2. Add a Mesh Filter and select meshes","text":"
  1. Click on the Add Component button.

  2. Search for Mesh Filter and select it by clicking on it.

  3. For each TrafficLight specify the mesh you want to use.

"},{"location":"Components/Environment/AddNewEnvironment/AddTrafficLights/#3add-a-mesh-renderer-and-specify-materials","title":"3.Add a Mesh Renderer and specify materials","text":"
  1. The same way as above search for Mesh Renderer and select it.

  2. Now you need to specify individual component materials.

    For example in the Traffic.Lights.001 mesh there are four sub-meshes that need their individual materials.

    To specify a material, click on the selection button on the Materials element, then search for the material you want to use and select it.

    Repeat this process until you have specified all materials. If you add one more material than there are sub-meshes, you will see this warning. Then just remove the last material and the TrafficLight is prepared.

    !!! info A different material for every bulb is necessary for the color-changing behavior that we expect from traffic lights. Even though in most cases you will use the same material for every Bulb, having them as separate elements is necessary. Please only use models of TrafficLights that have a different Materials Element for every Bulb.

    !!! warning \"Materials order\" When specifying materials remember the order in which they are used in the mesh. Especially remember what Materials Elements are associated with every Bulb in the TrafficLight. This information will be needed later.

      !!! example\n      In the case of `Traffic.Lights.001` the bulb materials are ordered starting from the left side with index 1 and increasing to the right.\n\n      ![bulb 1](traffic_light_1_bulb.png)\n\n      ![bulb 2](traffic_light_2_bulb.png)\n\n      ![bulb 3](traffic_light_3_bulb.png)\n
"},{"location":"Components/Environment/AddNewEnvironment/AddTrafficLights/#4-add-a-mesh-collider","title":"4. Add a Mesh Collider","text":"

The same way as above, search for Mesh Collider and select it. A collider may not seem useful, as the TrafficLight in many cases will be out of reach of vehicles. It is, however, used for LiDAR simulation, so it is advised to always add colliders to Objects that should be detected by LiDARs.
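
As a minimal illustration of why this matters (a sketch assuming a simple raycast-based LiDAR model, not the actual RGL internals; rayOrigin, rayDirection and maxRange are assumed variables), a ray only reports objects that have colliders:

var rayOrigin = transform.position;\nvar rayDirection = transform.forward;\nvar maxRange = 100f;\n// Objects without a collider are invisible to the ray\nif (Physics.Raycast(rayOrigin, rayDirection, out RaycastHit hit, maxRange))\n{\n    Debug.Log($\"Hit {hit.collider.name} at {hit.distance} m\");\n}\n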

"},{"location":"Components/Environment/AddNewEnvironment/AddTrafficLights/#5-position-trafficlight-in-the-environment","title":"5. Position TrafficLight in the Environment","text":"

Finally, after configuring all visual aspects of the TrafficLight, you can position it in the environment. Do this by dragging the TrafficLight with a square representing a plane or with an arrow representing a single axis.

"},{"location":"Components/Environment/AddNewEnvironment/AddTrafficLights/#6-add-trafficlight-script","title":"6. Add TrafficLight Script","text":"

The Traffic Light Script will enable you to control how the TrafficLight lights up and to create sequences.

  1. Click on Add Component, search for the Traffic Light script and select it.

  2. You should see the Bulb Emission config already configured. These are the colors that will be used to light up the Bulbs in TrafficLight. You may adjust them to suit your needs.

  3. You will have to specify the Bulb material config, in which you should add elements with the following fields: - Bulb Type - One of the predefined Bulb types that describes the Bulb (its color and pattern).

    - Material Index - Index of the material that you want to be associated with the Bulb Type. This is where you need the knowledge from earlier, where we said you have to remember which Materials Element corresponds to which bulb sub-mesh (see the sketch after this list).

    !!! example \"Bulb configuration example\" Here we specify an element Type as RED_BULB and associate it with Material that has an index 3. This will result in associating the right most bulb with the name RED_BULB. This information will be of use to us when specifying TrafficLights sequences.

      ![traffic light bulb config](traffic_light_bulb_config.gif)\n
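
For illustration only (the actual Traffic Light script encapsulates this logic), lighting up a single bulb boils down to enabling emission on the material at its configured index; index 3 below is the assumed RED_BULB index from the example above:

var renderer = GetComponent<MeshRenderer>();\nvar materials = renderer.materials;          // per-bulb materials, in mesh order\nmaterials[3].EnableKeyword(\"_EMISSION\");     // assumed index of the red bulb\nmaterials[3].SetColor(\"_EmissionColor\", Color.red);\n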

Success

Once you have added TrafficLights to your Environment, you can start configuring RandomTraffic which will add moving vehicles to it! Details here.

"},{"location":"Components/Environment/CreatePCD/","title":"Create PCD","text":""},{"location":"Components/Environment/CreatePCD/#create-pcd","title":"Create PCD","text":"

This section

This section is still under development!

"},{"location":"Components/Environment/CreatePCD/#pointcloudmapper","title":"PointCloudMapper","text":""},{"location":"Components/Environment/CreatePCD/#description","title":"Description","text":"

PointCloudMapper is a tool for vehicle-based point cloud mapping in a simulation environment. It is very useful when you need a point cloud of some location but don't have the possibility to physically map the real place. Instead, you can map the simulated environment.

"},{"location":"Components/Environment/CreatePCD/#required-data","title":"Required Data","text":"

To properly perform the mapping, make sure you have the following files downloaded and configured:

"},{"location":"Components/Environment/CreatePCD/#import-osm","title":"Import OSM","text":"
  1. Drag and drop an OSM file into the Unity project.

  2. The OSM file will be imported as an OsmDataContainer.

"},{"location":"Components/Environment/CreatePCD/#setup-an-environment","title":"Setup an Environment","text":"

For mapping, an Environment prefab is needed. The easiest way is to create a new Scene and import the Environment prefab into it. Details on how to do this can be found on this tutorial page.

"},{"location":"Components/Environment/CreatePCD/#setup-a-vehicle","title":"Setup a Vehicle","text":"

Create a Vehicle GameObject in the Hierarchy view.

"},{"location":"Components/Environment/CreatePCD/#add-visual-elements-optional","title":"Add visual elements (optional)","text":"

Add a vehicle model by adding a Geometry Object as a child of the Vehicle and adding all visual elements as its children.

Visual elements

You can learn how to add visual elements and required components like Mesh Filter or Mesh Renderer in this tutorial.

"},{"location":"Components/Environment/CreatePCD/#add-a-camera-optional","title":"Add a Camera (optional)","text":"

Add a Camera component for enhanced visuals by adding a Main Camera Object as a child of Vehicle Object and attaching a Camera Component to it.

  1. Add a Main Camera Object.

  2. Add a Camera Component by clicking 'Add Component' button, searching for it and selecting it.

  3. Change the Transform for an even better visual experience.

    !!! note \"Camera preview\" Observe how the Camera preview changes when adjusting the transformation.

"},{"location":"Components/Environment/CreatePCD/#setup-vehicle-sensors-rgl","title":"Setup Vehicle Sensors (RGL)","text":"

This part of the tutorial shows how to add a LiDAR sensor using RGL.

RGL Scene Manager

Please make sure that RGLSceneManager is added to the scene. For more details and instructions on how to do it, please visit this tutorial page.

  1. Create an empty Sensors GameObject as a child of the Vehicle Object.

  2. Create a Lidar GameObject as a child of the Sensors Object.

  3. Attach the Lidar Sensor (script) to the previously created Lidar Object by clicking on the 'Add Component' button, searching for the script and selecting it.

    !!! note \"Point Cloud Visualization\" Please note that Point Cloud Visualization (script) will be added automatically with the Lidar Sensor (script).

  4. Configure LiDAR pattern, e.g. by selecting one of the available presets.

    !!! example \"Example Lidar Sensor configuration\"

    !!! note \"Gaussian noise\" Gaussian noise should be disabled to achieve a more accurate map.

  5. Attach the RGL Mapping Adapter (script) to the previously created Lidar Object by clicking on the 'Add Component' button, searching for the script and selecting it.

  6. Configure the RGL Mapping Adapter - e.g. set the Leaf Size for filtering.

    !!! example \"Example RGL Mapping Adapter configuration\"

    !!! note \"Downsampling\" Please note that downsampling is applied on the single LiDAR scans only. If you would like to filter merged scans use the external tool described below.

"},{"location":"Components/Environment/CreatePCD/#effect-of-leaf-size-to-point-cloud-data-pcd-generation","title":"Effect of Leaf Size to Point Cloud Data (PCD) generation","text":"

Downsampling aims to reduce the PCD size, which for large point clouds may reach gigabytes, at the cost of map detail. It is essential to find the best balance between the size and an acceptable level of detail.

A small Leaf Size results in a more detailed PCD, while a large Leaf Size could result in excessive filtering such that objects like buildings are not recorded in the PCD.
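
The sketch below illustrates the idea behind leaf-size filtering - a simplified voxel grid that keeps one point per leaf-sized cell (real filters such as PCL's VoxelGrid keep the cell centroid instead of the first point):

// One representative point is kept per (leaf x leaf x leaf) cell\nVector3Int Cell(Vector3 p, float leaf) => new Vector3Int(\n    Mathf.FloorToInt(p.x / leaf), Mathf.FloorToInt(p.y / leaf), Mathf.FloorToInt(p.z / leaf));\n\nList<Vector3> Downsample(List<Vector3> points, float leaf)\n{\n    var cells = new Dictionary<Vector3Int, Vector3>();\n    foreach (var p in points)\n        cells.TryAdd(Cell(p, leaf), p); // a larger leaf means fewer cells and a sparser cloud\n    return new List<Vector3>(cells.Values);\n}\n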

In the following examples, it can be observed that when the Leaf Size is 1.0, the point cloud is very detailed. When the Leaf Size is 100.0, buildings are filtered out, which results in an empty PCD. A Leaf Size of 10.0 results in a reasonable PCD in the given example.

Leaf Size = 1.0 Leaf Size = 10.0 Leaf Size = 100.0"},{"location":"Components/Environment/CreatePCD/#setup-pointcloudmapper","title":"Setup PointCloudMapper","text":"
  1. Create a PointCloudMapper GameObject in the Hierarchy view.

  2. Attach Point Cloud Mapper script to previously created Point Cloud Mapper Object by clicking on the 'Add Component' button, searching for the script and selecting it.

  3. Configure the Point Cloud Mapper fields:

    - Osm Container - the OSM file you imported earlier - World Origin - MGRS position of the origin of the scene

      !!! note \"World Origin coordinate system\"\n      Use [*ROS* coordinate system](../../Vehicle/AddNewVehicle/AddSensors/#coordinate-system-conversion) for *World Origin*, not Unity.\n

    - Capture Location Interval - Distance between consecutive capture points along lanelet centerline - Output Pcd File Path - Output relative path from Assets folder - Target Vehicle - The vehicle you want to use for point cloud capturing that you created earlier

    !!! example \"Example Point Cloud Mapper configuration\"

    !!! note \"Lanelet visualization\" It is recommended to disable Lanelet Visualizer by setting Material to None and Width equal to zero. Rendered Lanelet is not ignored by the LiDAR so it would be captured in the PCD.

"},{"location":"Components/Environment/CreatePCD/#effect-of-capture-location-interval-to-pcd-generation","title":"Effect of Capture Location Interval to PCD generation","text":"

If the Capture Location Interval is too large, it could result in a sparse PCD where some regions of the map are captured well but other regions aren't captured at all.

In the examples below, a Leaf Size of 0.2 was used. Please note that using a different combination of Leaf Size and Capture Location Interval may result in a different PCD.

Capture Location Interval = 6 Capture Location Interval = 20 Capture Location Interval = 100"},{"location":"Components/Environment/CreatePCD/#capture-and-generate-pcd","title":"Capture and Generate PCD","text":"

If you play the simulation with a scene prepared using the steps above, PointCloudMapper will automatically start mapping. The vehicle will warp along the centerlines at intervals of CaptureLocationInterval and capture point cloud data. The PCD file will be written when you stop the scene or when all locations on the route have been captured.
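
Conceptually, the mapping loop looks like the sketch below (SampleAlong, CaptureScan and WritePcd are hypothetical names used only to illustrate the flow):

// Teleport the vehicle along the lanelet centerlines and capture at each stop\nforeach (var waypoint in SampleAlong(centerlines, captureLocationInterval))\n{\n    vehicle.transform.position = waypoint; // warp, not drive\n    CaptureScan();                         // hypothetical: trigger one LiDAR capture\n}\nWritePcd(outputPcdFilePath);               // hypothetical: write the merged cloud\n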

If the Vehicle stops moving for a longer time and you see the following message in the bottom left corner, you can safely stop the scene.

The Point cloud *.pcd file is saved to the location you specified in the Point Cloud Mapper.

"},{"location":"Components/Environment/CreatePCD/#pcd-postprocessing","title":"PCD postprocessing","text":"

Install required tool

The tool (DownsampleLargePCD) required for PCD conversion can be found under this link. The README contains building instructions and usage.

The generated PCD file is typically too large, therefore you need to downsample it. It should also be converted to ASCII format, because Autoware accepts only this format, while PointCloudMapper outputs the PCD in binary format.

  1. Change the working directory to the location with DownsampleLargePCD tool.
  2. Use this tool to downsample and save PCD in ASCII format.

    ./DownsampleLargePCD -in <PATH_TO_INPUT_PCD> -out <PATH_TO_OUTPUT_PCD> -leaf 0.2,0.2,0.2\n
    - Assuming the input PCD is in your working directory and named in_cloud.pcd, and the output PCD is to be named out_cloud.pcd, the command will be:
    ./DownsampleLargePCD -in in_cloud.pcd -out out_cloud.pcd -leaf 0.2,0.2,0.2\n
    - You can also save the PCD in binary format by adding the -binary 1 option.
  3. Your PCD is ready to use.

Converting PCD format without downsampling

If you don't want to downsample your PCD, you can convert the PCD file to ASCII format with the pcl_convert_pcd_ascii_binary tool. This tool is available in the pcl-tools package and can be installed on Ubuntu with the following command:

sudo apt install pcl-tools\n
To convert your PCD, use the command:
pcl_convert_pcd_ascii_binary <PATH_TO_INPUT_PCD> <PATH_TO_OUTPUT_PCD> 0\n
"},{"location":"Components/Environment/CreatePCD/#verify-the-pcd","title":"Verify the PCD","text":"

To verify your PCD you can launch the Autoware with the PCD file specified.

  1. Copy your PCD from the AWSIM project directory to the Autoware map directory.

    cp <PATH_TO_PCD_FILE> <PATH_TO_AUTOWARE_MAP>/\n
  2. Source ROS and Autoware

    source /opt/ros/humble/setup.bash\nsource <PATH_TO_AUTOWARE>/install/setup.bash\n
  3. Launch the planning simulation with the map directory path (map_path) and PCD file (pointcloud_map_file) specified.

    !!! note \"PCD file location\" The PCD file needs to be located in the Autoware map directory and as a pointcloud_map_file parameter you only supply the file name, not the path.

    !!! warning \"Absolute path\" When launching Autoware never use ~/ to specify the home directory. Either write the full absolute path ot use $HOME environmental variable.

    ros2 launch autoware_launch planning_simulator.launch.xml vehicle_model:=sample_vehicle sensor_model:=sample_sensor_kit map_path:=<ABSOLUTE_PATH_TO_AUTOWARE_MAP> pointcloud_map_file:=<PCD_FILE_NAME>\n
  4. Wait for Autoware to finish loading and inspect the PCD visually, keeping in mind the Effect of Leaf Size and the Effect of Capture Location Interval.

"},{"location":"Components/Environment/CreatePCD/#sample-scene","title":"Sample Scene","text":"

PointCloudMapping.unity is a sample scene showcasing the PointCloudMapper. It requires setting up the OSM data and a 3D model map of the area according to the steps above.

Sample Mapping Scene

In this example you can see a correctly configured Point Cloud Mapping Scene.

"},{"location":"Components/Environment/LaneletBoundsVisualizer/","title":"LaneletBoundsVisualizer","text":""},{"location":"Components/Environment/LaneletBoundsVisualizer/#lanelet-bounds-visualizer","title":"Lanelet Bounds Visualizer","text":"

Lanelet Bounds Visualizer is a Unity Editor extension allowing the user to load the left and right bounds of a Lanelet into the Unity scene.

"},{"location":"Components/Environment/LaneletBoundsVisualizer/#usage","title":"Usage","text":"

The Lanelet bounds loading process can be started by opening AWSIM -> Visualize -> Load Lanelet Bounds in the top toolbar of the Unity Editor.

A window shown below will pop up. Select your Osm Data Container to specify which OSM data to load the Lanelet from.

The user can select whether to load the raw Lanelet or to adjust the resolution of the Lanelet by specifying the waypoint settings.

To load the raw Lanelet, simply click the Load Raw Lanelet button.

If the user wishes to change the resolution of the Lanelet, adjust the parameters of the Waypoint Settings as described below, and click the Load with Waypoint Settings button.

Once the Lanelet is successfully loaded, Lanelet bounds will be generated as a new GameObject named LaneletBounds.

To visualize the LaneletBounds, make sure Gizmos is turned on and select the LaneletBounds GameObject.

"},{"location":"Components/Environment/LaneletBoundsVisualizer/#important-notes","title":"Important Notes","text":"

Generally speaking, visualizing Lanelet Bounds will result in a very laggy simulation. Therefore, it is recommended to hide the LaneletBounds GameObject when not used. The lag of the simulation becomes worse as you set the resolution of the Lanelet Bounds higher, so it is also recommended to set the resolution within a reasonable range.

It is also important to note that no matter how high you set the resolution to be, it will not be any better than the original Lanelet (i.e. the raw data). Rather, the computational load will increase and the simulation will become more laggy. If the user wishes to get the highest quality of Lanelet Bounds, it is recommended to use the Load Raw Lanelet button.

In short, the Waypoint Settings parameters should be thought of as a way to decrease the resolution relative to the original Lanelet, reducing the computational load and thus the lag of the simulation.

Higher Resolution Raw Lanelet Lower Resolution"},{"location":"Components/Environment/SmokeSimulator/","title":"SmokeSimulator","text":""},{"location":"Components/Environment/SmokeSimulator/#smoke-simulator","title":"Smoke Simulator","text":"

Simulating smoke in AWSIM may be useful when one wants to simulate exhaust gases from vehicles, smoke from emergency flares, etc.

In Unity, it is common to use the Particle System to simulate smoke. However, smoke simulated by the Particle System cannot be sensed by RGL in AWSIM, although in reality smoke is detected by LiDAR.

The Smoke Simulator was developed to simulate smoke that can be detected by RGL in Unity. It works by instantiating many small cubic GameObjects called Smoke Particles and allows each particle to be detected by RGL.
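
A minimal sketch of this idea (not the actual SmokeGenerator implementation; particleCount and radius are assumed parameters) is shown below - cube primitives come with a BoxCollider, which is what makes them detectable:

for (int i = 0; i < particleCount; i++)\n{\n    var particle = GameObject.CreatePrimitive(PrimitiveType.Cube); // includes a BoxCollider\n    particle.transform.localScale = Vector3.one * 0.05f;           // assumed particle size\n    particle.transform.position = transform.position + Random.insideUnitSphere * radius;\n}\n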

This document describes how to use the Smoke Simulator.

"},{"location":"Components/Environment/SmokeSimulator/#setting-smoke-simulator","title":"Setting Smoke Simulator","text":"

1. Create an empty GameObject.

2. Attach SmokeGenerator.cs to the previously created GameObject.

3. Adjust the parameters of the SmokeGenerator as described below:

4. (Optional) You may also specify the Material of the Smoke Particles. If this field is unspecified, a default material is used.

"},{"location":"Components/Environment/SmokeSimulator/#example-of-different-smokes","title":"Example of Different Smokes","text":""},{"location":"Components/Environment/SmokeSimulator/#thin-smoke","title":"Thin Smoke","text":""},{"location":"Components/Environment/SmokeSimulator/#heavy-smoke","title":"Heavy Smoke","text":""},{"location":"Components/Environment/V2I/","title":"V2I","text":""},{"location":"Components/Environment/V2I/#v2i-vehicle-to-infrastructure","title":"V2I (Vehicle-to-Infrastructure)","text":"

V2I is a component that simulates the V2I communication protocol, which allows data to be exchanged between vehicles and road infrastructure. In the current version of AWSIM, the V2I component publishes information about traffic lights.

"},{"location":"Components/Environment/V2I/#how-to-add-v2i-to-the-environment","title":"How to add V2I to the environment","text":""},{"location":"Components/Environment/V2I/#assign-lanelet2-wayid-and-relationid-to-trafficlight-object","title":"Assign Lanelet2 WayID and RelationID to TrafficLight object","text":"
  1. Load items from lanelet2 following the instructions

  2. Verify that the Traffic Light Lanelet ID component has been added to the Traffic Light game objects.

  3. Verify that the WayID and RelationID have been correctly assigned. You can use Vector Map Builder as presented below

"},{"location":"Components/Environment/V2I/#add-manually-traffic-light-lanelet-id-component-alternatively","title":"Add manually Traffic Light Lanelet ID component (alternatively)","text":"

If for some reason the Traffic Light Lanelet ID component has not been added to the Traffic Light object, add it as follows.

  1. Add component manually

  2. Fill Way ID

  3. Fill Relation ID

"},{"location":"Components/Environment/V2I/#add-v2i-prefab","title":"Add V2I prefab","text":""},{"location":"Components/Environment/V2I/#select-ego-transform","title":"Select EGO transform","text":""},{"location":"Components/Environment/V2I/#parameters","title":"Parameters","text":"Name Type Description Output Hz int Topic publication frequency Ego Vehicle Transform transform Ego Vehicle object transform Ego Distance To Traffic Signals double Maximum distance between Traffic Light and Ego Traffic Signal ID enum Possibility to select if as traffic_signal_id field in msg is Relation ID or Way ID Traffic Signals Topic string Topic name

Note

The V2I feature can be used as Traffic Light ground truth information; for that usage, Way ID should be selected.
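
As an illustration of the Ego Distance To Traffic Signals parameter, the publisher conceptually filters lights by distance, as in the sketch below (requires using System.Linq; the variable names mirror the parameters above and are assumptions):

// Only traffic lights within the configured distance from Ego are published\nvar inRange = trafficLights.Where(light =>\n    Vector3.Distance(light.transform.position, egoVehicleTransform.position)\n    <= egoDistanceToTrafficSignals);\n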

"},{"location":"Components/ROS2/AddACustomROS2Message/","title":"Add custom ROS2 msg","text":""},{"location":"Components/ROS2/AddACustomROS2Message/#add-a-custom-ros2-message","title":"Add a custom ROS2 message","text":"

If you want to use a custom message in AWSIM, you need to generate the appropriate files; to do this, you have to build ROS2ForUnity yourself - please follow the steps below. Remember to start with the prerequisites though.

ROS2ForUnity role

For a better understanding of the role of ROS2ForUnity and the messages used, we encourage you to read this section.

custom_msgs

In order to simplify this tutorial, the name of the package containing the custom message is assumed to be custom_msgs - remember to replace it with the name of your package.

"},{"location":"Components/ROS2/AddACustomROS2Message/#prerequisites","title":"Prerequisites","text":"

ROS2ForUnity depends on ros2cs - a C# .NET library for ROS2. This library is already included, so you don't need to install it, but there are a few prerequisites that must be resolved first.

Please select your system and resolve all prerequisites:

UbuntuWindows "},{"location":"Components/ROS2/AddACustomROS2Message/#1-workspace-preparation","title":"1. Workspace preparation","text":"
  1. Clone the ROS2ForUnity repository by executing the command:

    === \"Ubuntu\"

    git clone https://github.com/RobotecAI/ros2-for-unity ~/\n
    !!! warning The cloned ROS 2 For Unity repository must be located in the home directory ~/. === \"Windows\"
    git clone https://github.com/RobotecAI/ros2-for-unity /C\n
    !!! warning The cloned ROS 2 For Unity repository must be located in the root directory C:\\.
  2. Pull the dependent repositories by executing the commands:

    === \"Ubuntu\"

    cd ~/ros2-for-unity\n. /opt/ros/humble/setup.bash\n./pull_repositories.sh\n

    === \"Windows\"

    cd C:\\ros2-for-unity\nC:\\ros2_humble\\local_setup.ps1\n.\\pull_repositories.ps1\n
"},{"location":"Components/ROS2/AddACustomROS2Message/#2-setup-custom_msgs-package","title":"2. Setup custom_msgs package","text":"

The method of adding a custom package to the build depends on where it is located. The package can be on your local machine or hosted in a git repository. Please choose the appropriate option and follow the instructions.

"},{"location":"Components/ROS2/AddACustomROS2Message/#21-package-contained-on-local-machine","title":"2.1. Package contained on local machine","text":"
  1. Copy the custom_msgs package containing the custom message to the src/ros2cs/custom_messages directory

    === \"Ubuntu\"

    cp -r ~/custom_msgs ~/ros2-for-unity/src/ros2cs/custom_messages/\n

    === \"Windows\"

    Copy-Item 'C:\\custom_msgs' -Destination 'C:\\ros2-for-unity\\src\\custom_messages'\n
"},{"location":"Components/ROS2/AddACustomROS2Message/#22-package-hosted-on-git-repository","title":"2.2. Package hosted on git repository","text":"
  1. Open ros2-for-unity/ros2_for_unity_custom_messages.repos file in editor.
  2. Modify the contents of the file shown below, uncomment and set:

    - <package_name> - to your package name - so in this case custom_msgs, - <repo_url> - to repository address, - <repo_branch> - to desired branch.

    repositories:\n#  src/ros2cs/custom_messages/<package_name>:\n#    type: git\n#    url: <repo_url>\n#    version: <repo_branch>\n

    !!! example Below is an example of a file configured to pull two message packages (custom_msgs, autoware_auto_msgs) hosted on git repositories.

    # NOTE: Use this file if you want to build with custom messages that reside in a separate remote repo.\n# NOTE: use the following format\n\nrepositories:\n    src/ros2cs/custom_messages/custom_msgs:\n        type: git\n        url: https://github.com/tier4/custom_msgs.git\n        version: main\n    src/ros2cs/custom_messages/autoware_auto_msgs:\n        type: git\n        url: https://github.com/tier4/autoware_auto_msgs.git\n        version: tier4/main\n
  3. Now pull the repositories again (also the custom_msgs package repository)

    === \"Ubuntu\"

    cd ~/ros2-for-unity\n./pull_repositories.sh\n

    === \"Windows\"

    cd C:\\ros2-for-unity\n.\\pull_repositories.ps1\n
"},{"location":"Components/ROS2/AddACustomROS2Message/#3-build-ros-2-for-unity","title":"3. Build ROS 2 For Unity","text":"

Build ROS2ForUnity with custom message packages using the following commands:

UbuntuWindows
cd ~/ros2-for-unity\n./build.sh --standalone\n
cd C:\\ros2-for-unity\n.\\build.ps1 -standalone\n
"},{"location":"Components/ROS2/AddACustomROS2Message/#4-install-custom_msgs-to-awsim","title":"4. Install custom_msgs to AWSIM","text":"

The new ROS2ForUnity build, which you just made in step 3, contains multiple libraries that already exist in AWSIM. To install custom_msgs without copying all the other unnecessary files, you should take only the custom_msgs-related libraries.

You can find them in the following directories and simply copy them to the analogous directories in the AWSIM/Assets/Ros2ForUnity folder, or use the script described here.

UbuntuWindows

- ros2-for-unity/install/asset/Ros2ForUnity/Plugins - files whose names match custom_msgs_* - ros2-for-unity/install/asset/Ros2ForUnity/Plugins/Windows/x86_64/ - files whose names match custom_msgs_*

"},{"location":"Components/ROS2/AddACustomROS2Message/#automation-of-copying-message-files","title":"Automation of copying message files","text":"UbuntuWindows

To automate the process, you can use a script that copies all files related to your custom_msgs package.

  1. Create a file named copy_custom_msgs.sh in directory ~/ros2-for-unity/ and paste the following content into it.
    #!/bin/bash\necho \"CUSTOM_MSGS_PACKAGE_NAME: $1\"\necho \"AWSIM_DIR_PATH: $2\"\nfind ./install/asset/Ros2ForUnity/Plugins -maxdepth 1 -name \"$1*\" -type f -exec cp {} $2/Assets/Ros2ForUnity/Plugins \\;\nfind ./install/asset/Ros2ForUnity/Plugins/Linux/x86_64 -maxdepth 1 -name \"lib$1*\" -type f -exec cp {} $2/Assets/Ros2ForUnity/Plugins/Linux/x86_64 \\;\n
  2. Save the file and give it executable rights with the command:
    chmod a+x copy_custom_msgs.sh\n
  3. Run the script with two arguments:
    ./copy_custom_msgs.sh <CUSTOM_MSGS_PACKAGE_NAME> <AWSIM_DIR_PATH>\n

Example

./copy_custom_msgs.sh custom_msgs ~/unity/AWSIM/\n

To automate the process, you can use these commands with the paths changed to match your setup:

Example

Get-ChildItem C:\\ros2-for-unity\\install\\asset\\Ros2ForUnity\\Plugins\\* -Include @('custom_msgs*') | Copy-Item -Destination C:\\unity\\AWSIM\\Assets\\Ros2ForUnity\\Plugins\nGet-ChildItem C:\\ros2-for-unity\\install\\asset\\Ros2ForUnity\\Plugins\\Windows\\x86_64\\* -Include @('custom_msgs*') | Copy-Item -Destination C:\\unity\\AWSIM\\Assets\\Ros2ForUnity\\Plugins\\Windows\\x86_64\n
"},{"location":"Components/ROS2/AddACustomROS2Message/#5-test","title":"5. Test","text":"

Make sure that the custom_msgs package files have been properly copied to AWSIM/Assets/Ros2ForUnity. Then try to create a message object as described in this section and check in the Unity Editor console that it compiles without errors.
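
For example, assuming your custom_msgs package defines a message named Example (a hypothetical name), a quick compile check could look like this:

// Compiles only if the custom_msgs libraries were copied correctly\nvar msg = new custom_msgs.msg.Example(); // hypothetical message type\nDebug.Log(msg.GetType().FullName);\n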

"},{"location":"Components/ROS2/ROS2ForUnity/","title":"ROS2 For Unity","text":""},{"location":"Components/ROS2/ROS2ForUnity/#ros2-for-unity","title":"ROS2 For Unity","text":"

Ros2ForUnity (R2FU) module is a communication solution that effectively connects Unity and the ROS2 ecosystem, maintaining a strong integration. Unlike other solutions, it doesn't rely on bridging communication but rather utilizes the ROS2 middleware stack (specifically the rcl layer and below), enabling the inclusion of ROS2 nodes within Unity simulations.

R2FU is used in AWSIM for many reasons. First of all, it offers high-performance integration between Unity and ROS2, with improved throughput and lower latencies compared to bridging solutions. It provides real ROS2 functionality for simulation entities in Unity, supports standard and custom messages, and includes convenient abstractions and tools, all wrapped as a Unity asset. For a detailed description, please see the README.

"},{"location":"Components/ROS2/ROS2ForUnity/#prerequisites","title":"Prerequisites","text":"

This asset can be prepared in two flavours:

By default, asset R2FU in AWSIM is prepared in standalone mode.

Warning

To avoid internal conflicts between the standalone libraries and the sourced ones, a ROS2 instance shouldn't be sourced before running AWSIM or the Unity Editor.

Can't see topics

There are no errors but I can't see topics published by R2FU

Try to stop the ROS2 daemon forcefully (pkill -9 ros2_daemon) and restart it (ros2 daemon start).

"},{"location":"Components/ROS2/ROS2ForUnity/#concept","title":"Concept","text":"

Describing the concept of using R2FU in AWSIM, we distinguish:

The SimulatorROS2Node implementation, thanks to the use of R2FU, allows you to add communication via ROS2 to any Unity component. For example, we can receive control commands from any other ROS2 node and publish the current state of Ego, such as its position in the environment.

Simulation time

If you want to use system time (ROS2 time) instead of Unity time, use ROS2TimeSource instead of UnityTimeSource in the SimulatorROS2Node class.

"},{"location":"Components/ROS2/ROS2ForUnity/#package-structure","title":"Package structure","text":"

Ros2ForUnity asset contains:

"},{"location":"Components/ROS2/ROS2ForUnity/#scripts","title":"Scripts","text":""},{"location":"Components/ROS2/ROS2ForUnity/#extension-scripts","title":"Extension Scripts","text":"

Additionally, in order to adapt AWSIM to the use of R2FU, the following scripts are used:

"},{"location":"Components/ROS2/ROS2ForUnity/#default-message-types","title":"Default message types","text":"

The basic ROS2 msgs types that are supported in AWSIM by default include:

In order for the message package to be used in Unity, its *.dll and *.so libraries must be generated using R2FU.

Custom message

If you want to generate a custom message to allow it to be used in AWSIM please read this tutorial.

"},{"location":"Components/ROS2/ROS2ForUnity/#use-of-generated-messages-in-unity","title":"Use of generated messages in Unity","text":"

Each message type is composed of other types, which can themselves be complex types. All of them are ultimately based on built-in C# types. The most common built-in types in messages are bool, int, double and string. These types have their ROS2 communication equivalents.

A good example of a complex type that is added to other complex types in order to specify a reference - in the form of a timestamp and a frame - is std_msgs/Header. This message has the following form:

builtin_interfaces/msg/Time stamp\nstring frame_id\n

ROS2 directive

In order to work with ROS2 in Unity, remember to add the directive using ROS2; at the top of the file to import types from this namespace.

"},{"location":"Components/ROS2/ROS2ForUnity/#create-an-object","title":"Create an object","text":"

The simplest way to create an object of Header type is:

var header = new std_msgs.msg.Header()\n{\n    Frame_id = \"map\"\n};\n

It is not required to define the value of each field. As you can see, this creates an object filling in only the Frame_id field, leaving the field of the complex builtin_interfaces/msg/Time type initialized by default. Time is an important element of any message; how to fill it in is described here.

"},{"location":"Components/ROS2/ROS2ForUnity/#accessing-and-filling-in-message-fields","title":"Accessing and filling in message fields","text":"

As you might have noticed in the previous example, a ROS2 message in Unity is just a structure containing the same fields, keeping the same names and types. Access to its fields for reading and filling is the same as for any C# structure.

var header2 = new std_msgs.msg.Header();\nheader2.Frame_id = \"map\";\nheader2.Stamp.Sec = 1234567;\nDebug.Log($\"StampSec: {header2.Stamp.Sec} and Frame: {header2.Frame_id}\");\n

Field names

There is one always-present difference in field names. The first letter of each message field in Unity is always uppercase - even if the corresponding field in the base ROS2 message from which it is generated is lowercase.

"},{"location":"Components/ROS2/ROS2ForUnity/#filling-a-time","title":"Filling a time","text":"

In order to complete the time field of the Header message, we recommend the following methods in AWSIM:

  1. When the message has no Header but only the Time type:

    var header2 = new std_msgs.msg.Header();\nheader2.Stamp = SimulatorROS2Node.GetCurrentRosTime();\n
  2. When the message has a Header - like for example autoware_auto_vehicle_msgs/VelocityReport:

    velocityReportMsg = new autoware_auto_vehicle_msgs.msg.VelocityReport()\n{\n    Header = new std_msgs.msg.Header()\n    {\n        Frame_id = \"map\",\n    }\n};\nvar velocityReportMsgHeader = velocityReportMsg as MessageWithHeader;\nSimulatorROS2Node.UpdateROSTimestamp(ref velocityReportMsgHeader);\n

These methods allow you to fill the Time field in the message object with the simulation time - taken from the ROS2Clock.

"},{"location":"Components/ROS2/ROS2ForUnity/#create-a-message-with-array","title":"Create a message with array","text":"

Some message types contain an array of some type. An example of such a message is nav_msgs/Path, which has a PoseStamped array. In order to fill such an array, you must first create a List<T>, fill it and then convert it to a raw array.

var posesList = new List<geometry_msgs.msg.PoseStamped>();\nfor(int i=0; i<=5;++i)\n{\n    var poseStampedMsg = new geometry_msgs.msg.PoseStamped();\n    poseStampedMsg.Pose.Position.X = i;\n    poseStampedMsg.Pose.Position.Y = 5-i;\n    var poseStampedMsgHeader = poseStampedMsg as MessageWithHeader;\n    SimulatorROS2Node.UpdateROSTimestamp(ref poseStampedMsgHeader);\n    posesList.Add(poseStampedMsg);\n}\nvar pathMsg = new nav_msgs.msg.Path(){Poses=posesList.ToArray()};\nvar pathMsgHeader = pathMsg as MessageWithHeader;\nSimulatorROS2Node.UpdateROSTimestamp(ref pathMsgHeader);\n// pathMsg is ready\n
"},{"location":"Components/ROS2/ROS2ForUnity/#publish-on-the-topic","title":"Publish on the topic","text":"

In order to publish messages, a publisher object must be created. The static method CreatePublisher of the SimulatorROS2Node makes it easy. You must specify the type of message, the topic on which it will be published and the QoS profile. Below is an example of an autoware_auto_vehicle_msgs.msg.VelocityReport type message published with a frequency of 30Hz on the /vehicle/status/velocity_status topic; the QoS profile is (Reliability=Reliable, Durability=Volatile, History=Keep last, Depth=1):

using UnityEngine;\nusing ROS2;\n\nnamespace AWSIM\n{\n    public class VehicleReportRos2Publisher : MonoBehaviour\n    {\n        float timer = 0;\n        int publishHz = 30;\n        QoSSettings qosSettings = new QoSSettings()\n        {\n            ReliabilityPolicy = ReliabilityPolicy.QOS_POLICY_RELIABILITY_RELIABLE,\n            DurabilityPolicy = DurabilityPolicy.QOS_POLICY_DURABILITY_VOLATILE,\n            HistoryPolicy = HistoryPolicy.QOS_POLICY_HISTORY_KEEP_LAST,\n            Depth = 1,\n        };\n        string velocityReportTopic = \"/vehicle/status/velocity_status\";\n        autoware_auto_vehicle_msgs.msg.VelocityReport velocityReportMsg;\n        IPublisher<autoware_auto_vehicle_msgs.msg.VelocityReport> velocityReportPublisher;\n\n        void Start()\n        {\n            // Create a message object and fill in the constant fields\n            velocityReportMsg = new autoware_auto_vehicle_msgs.msg.VelocityReport()\n            {\n                Header = new std_msgs.msg.Header()\n                {\n                    Frame_id = \"map\",\n                }\n            };\n\n            // Create publisher with specific topic and QoS profile\n            velocityReportPublisher = SimulatorROS2Node.CreatePublisher<autoware_auto_vehicle_msgs.msg.VelocityReport>(velocityReportTopic, qosSettings.GetQoSProfile());\n        }\n\n         bool NeedToPublish()\n        {\n            timer += Time.deltaTime;\n            var interval = 1.0f / publishHz;\n            interval -= 0.00001f;\n            if (timer < interval)\n                return false;\n            timer = 0;\n            return true;\n        }\n\n        void FixedUpdate()\n        {\n            // Provide publications with a given frequency\n            if (NeedToPublish())\n            {\n                // Fill in non-constant fields\n                velocityReportMsg.Longitudinal_velocity = 1.00f;\n                velocityReportMsg.Lateral_velocity = 0.00f;\n                velocityReportMsg.Heading_rate = 0.00f;\n\n                // Update Stamp\n                var velocityReportMsgHeader = velocityReportMsg as MessageWithHeader;\n                SimulatorROS2Node.UpdateROSTimestamp(ref velocityReportMsgHeader);\n\n                // Publish\n                velocityReportPublisher.Publish(velocityReportMsg);\n            }\n        }\n    }\n}\n
"},{"location":"Components/ROS2/ROS2ForUnity/#upper-limit-to-publish-rate","title":"Upper limit to publish rate","text":"

The above example demonstrates the implementation of the 'publish' method within the FixedUpdate Unity event method. However, this approach has certain limitations. The maximum output frequency is directly tied to the current value of the Fixed TimeStep specified in the Project Settings. Considering that AWSIM targets a frame rate of 60 frames per second (FPS), the current Fixed TimeStep is set to 1/60s. This imposes a 60Hz limit on the publish rate of any sensor implemented within the FixedUpdate method. Should a higher output frequency be necessary, an alternative implementation must be considered, or the Fixed TimeStep adjusted in Editor->Project Settings->Time.

The table provided below presents a list of sensors along with examples of topics that are constrained by the Fixed TimeStep limitation.

Object Topic GNSS Sensor /sensing/gnss/pose IMU Sensor /sensing/imu/tamagawa/imu_raw Traffic Camera /sensing/camera/traffic_light/image_raw Pose Sensor /awsim/ground_truth/vehicle/pose OdometrySensor /awsim/ground_truth/localization/kinematic_state LIDAR /sensing/lidar/top/pointcloud_raw Vehicle Status /vehicle/status/velocity_status

If a sensor or any other publishing object within AWSIM does not have any direct correlation with physics (i.e., does not require synchronization with physics), it can be implemented without using the FixedUpdate method. Consequently, this allows bypassing the upper limit imposed by the Fixed TimeStep.
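
As a hedged sketch of such an alternative, a plain .NET timer can drive publication independently of Unity's update loops; whether a given publisher may safely be called from a non-main thread must be verified for each case:

// Publish at 100Hz, decoupled from Unity's Fixed TimeStep\nvar timer = new System.Threading.Timer(_ =>\n{\n    clockPublisher.Publish(clockMsg); // assumed publisher; verify thread-safety first\n}, null, 0, 10); // period in milliseconds => 100Hz\n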

The table presented below shows a list of objects that are not constrained by the Fixed TimeStep limitation.

Object Topic Clock /clock"},{"location":"Components/ROS2/ROS2ForUnity/#subscribe-to-the-topic","title":"Subscribe to the topic","text":"

In order to subscribe to messages, a subscriber object must be created. The static method CreateSubscription of the SimulatorROS2Node makes it easy. You must specify the type of message, the topic from which it will be subscribed and the QoS profile. In addition, a callback must be defined, which will be called when a message is received - in particular, it can be defined as a lambda expression. Below is an example of a std_msgs.msg.Bool type message subscription on the /vehicle/is_vehicle_stopped topic; the QoS profile is \u201csystem default\u201d:

using UnityEngine;\nusing ROS2;\n\nnamespace AWSIM\n{\n    public class VehicleStoppedSubscriber : MonoBehaviour\n    {\n        QoSSettings qosSettings = new QoSSettings();\n        string isVehicleStoppedTopic = \"/vehicle/is_vehicle_stopped\";\n        bool isVehicleStopped = false;\n        ISubscription<std_msgs.msg.Bool> isVehicleStoppedSubscriber;\n\n        void Start()\n        {\n            isVehicleStoppedSubscriber = SimulatorROS2Node.CreateSubscription<std_msgs.msg.Bool>(isVehicleStoppedTopic, VehicleStoppedCallback, qosSettings.GetQoSProfile());\n        }\n\n        void VehicleStoppedCallback(std_msgs.msg.Bool msg)\n        {\n            isVehicleStopped = msg.Data;\n        }\n\n        void OnDestroy()\n        {\n            SimulatorROS2Node.RemoveSubscription<std_msgs.msg.Bool>(isVehicleStoppedSubscriber);\n        }\n    }\n}\n
"},{"location":"Components/ROS2/ROS2TopicList/","title":"ROS2 topic list","text":""},{"location":"Components/ROS2/ROS2TopicList/#ros2-topic-list","title":"ROS2 topic list","text":"

The following is a summary of the ROS2 topics that the AWSIM node subscribes to and publishes on.

Ros2ForUnity

AWSIM works with ROS2 thanks to the use of Ros2ForUnity - read the details here. If you want to generate a custom message to allow it to be used in AWSIM please read this tutorial.

"},{"location":"Components/ROS2/ROS2TopicList/#list-of-subscribers","title":"List of subscribers","text":"Category Topic Message type frame_id Hz QoS

Control

Ackermann Control /control/command/control_cmd autoware_auto_control_msgs/AckermannControlCommand - 60 Reliable, TransientLocal, KeepLast/1 Gear /control/command/gear_cmd autoware_auto_vehicle_msgs/GearCommand - 10 Reliable, TransientLocal, KeepLast/1 Turn Indicators /control/command/turn_indicators_cmd autoware_auto_vehicle_msgs/TurnIndicatorsCommand - 10 Reliable, TransientLocal, KeepLast/1 Hazard Lights /control/command/hazard_lights_cmd autoware_auto_vehicle_msgs/HazardLightsCommand - 10 Reliable, TransientLocal, KeepLast/1 Emergency /control/command/emergency_cmd tier4_vehicle_msgs/msg/VehicleEmergencyStamped - 60 Reliable, TransientLocal, KeepLast/1"},{"location":"Components/ROS2/ROS2TopicList/#list-of-publishers","title":"List of publishers","text":"Category Topic Message type frame_id Hz QoS

Clock

/clock rosgraph_msgs/Clock - 100 Best effort,Volatile,Keep last/1

Sensors

Camera /sensing/camera/traffic_light/camera_info sensor_msgs/CameraInfo traffic_light_left_camera/camera_link 10 Best effort,Volatile,Keep last/1 Camera /sensing/camera/traffic_light/image_raw sensor_msgs/Image traffic_light_left_camera/camera_link 10 Best effort,Volatile,Keep last/1 GNSS /sensing/gnss/pose geometry_msgs/Pose gnss_link 1 Reliable,Volatile,Keep last/1 GNSS /sensing/gnss/pose_with_covariance geometry_msgs/PoseWithCovarianceStamped gnss_link 1 Reliable,Volatile,Keep last/1 IMU /sensing/imu/tamagawa/imu_raw sensor_msgs/Imu tamagawa/imu_link 30 Reliable,Volatile,Keep last/1000 Top LiDAR /sensing/lidar/top/pointcloud_raw sensor_msgs/PointCloud2 sensor_kit_base_link 10 Best effort,Volatile,Keep last/5 Top LiDAR /sensing/lidar/top/pointcloud_raw_ex sensor_msgs/PointCloud2 sensor_kit_base_link 10 Best effort,Volatile,Keep last/5

Vehicle Status

Velocity /vehicle/status/velocity_status autoware_auto_vehicle_msgs/VelocityReport base_line 30 Reliable,Volatile,Keep last/1 Steering /vehicle/status/steering_status autoware_auto_vehicle_msgs/SteeringReport - 30 Reliable,Volatile,Keep last/1 Control Mode /vehicle/status/control_mode autoware_auto_vehicle_msgs/ControlModeReport - 30 Reliable,Volatile,Keep last/1 Gear /vehicle/status/gear_status autoware_auto_vehicle_msgs/GearReport - 30 Reliable,Volatile,Keep last/1 Turn Indicators /vehicle/status/turn_indicators_status autoware_auto_vehicle_msgs/TurnIndicatorsReport - 30 Reliable,Volatile,Keep last/1 Hazard Lights /vehicle/status/hazard_lights_status autoware_auto_vehicle_msgs/HazardLightsReport - 30 Reliable,Volatile,Keep last/1

Ground Truth

Pose /awsim/ground_truth/vehicle/pose geometry_msgs/PoseStamped base_link 100 Reliable,Volatile,Keep last/1"},{"location":"Components/ScenarioSimulation/PreparingTheConnectionBetweenAWSIMAndScenarioSimulator/","title":"Preparing the connection between AWSIM and scenario_simulator_v2","text":""},{"location":"Components/ScenarioSimulation/PreparingTheConnectionBetweenAWSIMAndScenarioSimulator/#preparing-the-connection-between-awsim-and-scenario_simulator_v2","title":"Preparing the connection between AWSIM and scenario_simulator_v2","text":"

This tutorial describes: - how to modify a scenario to work with AWSIM - how to prepare the AWSIM scene to work with scenario_simulator_v2

"},{"location":"Components/ScenarioSimulation/PreparingTheConnectionBetweenAWSIMAndScenarioSimulator/#scenario-preparation-to-work-with-awsim","title":"Scenario preparation to work with AWSIM","text":"

To prepare the scenario to work with AWSIM, add the model3d field to the entity specification.

It is utilized as an asset key to identify the proper prefab.

Adjust the parameters of the configured vehicle to match the entity parameters in AWSIM as closely as required. The bounding box in particular is crucial for validating collisions correctly.

"},{"location":"Components/ScenarioSimulation/PreparingTheConnectionBetweenAWSIMAndScenarioSimulator/#default-awsim-asset-catalog","title":"Default AWSIM asset catalog","text":"

AWSIM currently supports the following asset key values.

The list can be extended if required. Appropriate values should be added to the asset key list in the ScenarioSimulatorConnector component, and the vehicle parameters in the scenario simulator should match them.

"},{"location":"Components/ScenarioSimulation/PreparingTheConnectionBetweenAWSIMAndScenarioSimulator/#ego-vehicle-entity-with-sensor","title":"Ego Vehicle Entity (with sensor)","text":"model3d boundingbox size (m) wheel base(m) front tread(m) rear tread(m) tier diameter(m) max steer(deg) lexus_rx450h width : 1.920 height : 1.700 length : 4.890 2.105 1.640 1.630 0.766 35"},{"location":"Components/ScenarioSimulation/PreparingTheConnectionBetweenAWSIMAndScenarioSimulator/#npc-vehicle-entity","title":"NPC Vehicle Entity","text":"model3d boundingbox size (m) wheel base(m) front tread(m) rear tread(m) tier diameter(m) max steer(deg) taxi width : 1.695 height : 1.515 length : 4.590 2.680 1.460 1.400 0.635 35 truck_2t width : 1.695 height : 1.960 length : 4.685 2.490 1.395 1.240 0.673 40 hatchback width : 1.695 height 1.515 length : 3.940 2.550 1.480 1.475 0.600 35 van width : 1.880 height : 2.285 length : 4.695 2.570 1.655 1.650 0.600 35 small_car width : 1.475 height 1.800 length : 3.395 2.520 1.305 1.305 0.557 35"},{"location":"Components/ScenarioSimulation/PreparingTheConnectionBetweenAWSIMAndScenarioSimulator/#npc-pedestrian-entity","title":"NPC Pedestrian Entity","text":"model3d boundingbox size (m) human width : 0.400 height : 1.800 length : 0.300"},{"location":"Components/ScenarioSimulation/PreparingTheConnectionBetweenAWSIMAndScenarioSimulator/#misc-object-entity","title":"Misc Object Entity","text":"model3d boundingbox size (m) sign_board width : 0.31 height : 0.58 length : 0.21"},{"location":"Components/ScenarioSimulation/PreparingTheConnectionBetweenAWSIMAndScenarioSimulator/#scenarios-limitations","title":"Scenarios limitations","text":"

The vast majority of features supported by scenario_simulator_v2 are supported with AWSIM as well. Currently supported features are described in the scenario_simulator_v2 documentation.

Features which are not supported when connected with AWSIM are listed below.

  1. Controller properties used by attach_*_sensor - pointcloudPublishingDelay - isClairvoyant - detectedObjectPublishingDelay - detectedObjectPositionStandardDeviation - detectedObjectMissingProbability - randomSeed

If those features are crucial for the scenario's execution, the scenario might not work properly.

"},{"location":"Components/ScenarioSimulation/PreparingTheConnectionBetweenAWSIMAndScenarioSimulator/#awsim-scene-preparation-to-work-with-scenario_simulator_v2","title":"AWSIM scene preparation to work with scenario_simulator_v2","text":"
  1. Disable or remove random traffic and any pre-spawned NPCs
  2. Disable or remove V2I traffic lights publishing
  3. Disable or remove the clock publisher

  4. Add ScenarioSimulatorConnector prefab to the scene - located in Assets/ScenarioSimulatorConnector

  5. Add Ego Follow Camera object - most likely Main Camera

  6. If necessary, update the asset_id-to-prefab mapping - a key in the map can be used in the scenario

  7. Add TimeSourceSelector prefab to the scene - located in Assets/AWSIM/Scripts/Clock/Prefabs

  8. Configure Type in the TimeSourceSelector component to SS2

"},{"location":"Components/ScenarioSimulation/SetupUnityProjectForScenarioSimulation/","title":"Setup Unity project for scenario simulation","text":""},{"location":"Components/ScenarioSimulation/SetupUnityProjectForScenarioSimulation/#running-awsim-from-unity-editor-with-scenario_simulator_v2","title":"Running AWSIM from Unity Editor with scenario_simulator_v2","text":"

Below you can find instructions on how to set up scenario execution using scenario_simulator_v2, with AWSIM run from the Unity Editor as the simulator. The instructions assume the Ubuntu OS.

"},{"location":"Components/ScenarioSimulation/SetupUnityProjectForScenarioSimulation/#prerequisites","title":"Prerequisites","text":"
  1. Build Autoware by following \"Build Autoware with scenario_simulator_v2\" section from the scenario simulator and AWSIM quick start guide

  2. Follow Setup Unity Project tutorial

"},{"location":"Components/ScenarioSimulation/SetupUnityProjectForScenarioSimulation/#running-the-demo","title":"Running the demo","text":"
  1. Open AutowareSimulationScenarioSimulator.unity scene placed under Assets/AWSIM/Scenes/Main directory
  2. Run the simulation by clicking the Play button at the top section of the Editor.
  3. Launch scenario_test_runner.

    source install/setup.bash\nros2 launch scenario_test_runner scenario_test_runner.launch.py                        \\\narchitecture_type:=awf/universe  record:=false                                         \\\nscenario:='$(find-pkg-share scenario_test_runner)/scenario/sample_awsim.yaml'          \\\nsensor_model:=awsim_sensor_kit  vehicle_model:=sample_vehicle                          \\\nlaunch_simple_sensor_simulator:=false autoware_launch_file:=\"e2e_simulator.launch.xml\" \\\ninitialize_duration:=260 port:=8080\n

"},{"location":"Components/ScenarioSimulation/SetupUnityProjectForScenarioSimulation/#other-sample-scenarios","title":"Other sample scenarios","text":""},{"location":"Components/ScenarioSimulation/SetupUnityProjectForScenarioSimulation/#conventional-traffic-lights-demo","title":"Conventional traffic lights demo","text":"

This scenario controls traffic signals in the scene based on OpenSCENARIO. It can be used to verify whether the traffic light recognition pipeline works well in Autoware.

ros2 launch scenario_test_runner scenario_test_runner.launch.py                                           \\\narchitecture_type:=awf/universe  record:=false                                                            \\\nscenario:='$(find-pkg-share scenario_test_runner)/scenario/sample_awsim_conventional_traffic_lights.yaml' \\\nsensor_model:=awsim_sensor_kit  vehicle_model:=sample_vehicle                                             \\\nlaunch_simple_sensor_simulator:=false autoware_launch_file:=\"e2e_simulator.launch.xml\"                    \\\ninitialize_duration:=260 port:=8080\n
"},{"location":"Components/ScenarioSimulation/SetupUnityProjectForScenarioSimulation/#v2i-traffic-lights-demo","title":"V2I traffic lights demo","text":"

This scenario publishes V2I traffic signal information based on OpenSCENARIO. It can be used to verify that Autoware responds correctly to V2I traffic light information.

ros2 launch scenario_test_runner scenario_test_runner.launch.py                                  \\\narchitecture_type:=awf/universe  record:=false                                                   \\\nscenario:='$(find-pkg-share scenario_test_runner)/scenario/sample_awsim_v2i_traffic_lights.yaml' \\\nsensor_model:=awsim_sensor_kit  vehicle_model:=sample_vehicle                                    \\\nlaunch_simple_sensor_simulator:=false autoware_launch_file:=\"e2e_simulator.launch.xml\"           \\\ninitialize_duration:=260 port:=8080\n
"},{"location":"Components/Sensors/CameraSensor/","title":"Camera Sensor","text":""},{"location":"Components/Sensors/CameraSensor/#camerasensor","title":"CameraSensor","text":""},{"location":"Components/Sensors/CameraSensor/#introduction","title":"Introduction","text":"

CameraSensor is a component that simulates an RGB camera. Autonomous vehicles can be equipped with many cameras used for various purposes. In the current version of AWSIM, the camera is used primarily to provide the image to the traffic light recognition module in Autoware.

"},{"location":"Components/Sensors/CameraSensor/#prefab","title":"Prefab","text":"

Prefab can be found under the following path:

Assets/AWSIM/Prefabs/Sensors/CameraSensor.prefab\n
"},{"location":"Components/Sensors/CameraSensor/#link-in-the-default-scene","title":"Link in the default Scene","text":"

The mentioned single CameraSensor has its own frame traffic_light_left_camera/camera_link in which its data is published. The sensor prefab is added to this frame. The traffic_light_left_camera/camera_link link is added to the base_link object located in the URDF.

A detailed description of the URDF structure and sensors added to prefab Lexus RX450h 2015 is available in this section.

"},{"location":"Components/Sensors/CameraSensor/#camerasensorholder-script","title":"CameraSensorHolder (script)","text":"

CameraSensorHolder (script) allows the sequential rendering of multiple camera sensors. To utilize it, each CameraSensor object should be attached as a child object of the CameraSensorHolder.

"},{"location":"Components/Sensors/CameraSensor/#elements-configurable-from-the-editor-level","title":"Elements configurable from the editor level","text":""},{"location":"Components/Sensors/CameraSensor/#camerasensor-components","title":"CameraSensor Components","text":"

For the CameraSensor to work properly, the GameObject to which the scripts are added must also have:

TrafficLights recognition

In case of problems with the recognition of traffic lights in Autoware, it may help to increase the image resolution and focal length of the camera in AWSIM.

Camera settings

If you would like to adjust the image captured by the camera, we encourage you to read this manual.

The CameraSensor functionality is split into two scripts:

Scripts can be found under the following path:

Assets/AWSIM/Scripts/Sensors/CameraSensor/*\n

In the same location there are also *.compute files containing used ComputeShaders.

"},{"location":"Components/Sensors/CameraSensor/#camerasensor-script","title":"CameraSensor (script)","text":"

Camera Sensor (script) is a core camera sensor component. It is responsible for applying OpenCV distortion and encoding to BGR8 format. The distortion model is assumed to be Plumb Bob. The script renders the image from the camera to Texture2D and transforms it using the distortion parameters. This image is displayed in the GUI and further processed to obtain the list of bytes in BGR8 format on the script output.

The script uses two ComputeShaders, they are located in the same location as the scripts:

API type feature DoRender void Renders the Unity camera, applies OpenCV distortion to rendered image and update output data."},{"location":"Components/Sensors/CameraSensor/#elements-configurable-from-the-editor-level_1","title":"Elements configurable from the editor level","text":""},{"location":"Components/Sensors/CameraSensor/#output-data","title":"Output Data","text":"

The sensor computation output format is presented below:

Category Type Description ImageDataBuffer byte[ ] Buffer with image data. CameraParameters CameraParameters Set of the camera parameters."},{"location":"Components/Sensors/CameraSensor/#cameraros2publisher-script","title":"CameraRos2Publisher (script)","text":"

Converts the data output from CameraSensor to ROS2 Image and CameraInfo type messages and publishes them. The conversion and publication are performed using the Publish(CameraSensor.OutputData outputData) method, which is the callback triggered by Camera Sensor (script) for the current output.

Because the entire image is always published, the ROI field of the message is always filled with zeros. The script also assumes that binning is zero and sets the rectification matrix to the identity matrix.
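
In terms of the message fields, this corresponds to something like the sketch below (field names follow the sensor_msgs/CameraInfo layout with the Unity-style capitalization described in the Ros2ForUnity section; a sketch, not the publisher's exact code):

cameraInfoMsg.Binning_x = 0;\ncameraInfoMsg.Binning_y = 0;\n// ROI left as zeros: the full image is always published\ncameraInfoMsg.R = new double[] { 1, 0, 0, 0, 1, 0, 0, 0, 1 }; // identity rectification matrix\n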

Warning

The script uses the camera parameters set in the CameraSensor script - remember to configure them depending on the camera you are using.

"},{"location":"Components/Sensors/CameraSensor/#elements-configurable-from-the-editor-level_2","title":"Elements configurable from the editor level","text":""},{"location":"Components/Sensors/CameraSensor/#published-topics","title":"Published Topics","text":" Category Topic Message type frame_id Camera info /sensing/camera/traffic_light/camera_info sensor_msgs/CameraInfo traffic_light_left_camera/camera_link Camera image /sensing/camera/traffic_light/image_raw sensor_msgs/Image traffic_light_left_camera/camera_link"},{"location":"Components/Sensors/GNSSSensor/","title":"GNSS Sensor","text":""},{"location":"Components/Sensors/GNSSSensor/#gnsssensor","title":"GnssSensor","text":""},{"location":"Components/Sensors/GNSSSensor/#introduction","title":"Introduction","text":"

GnssSensor is a component which simulates the position of the vehicle as computed by a Global Navigation Satellite System, based on the transformation of the GameObject to which this component is attached. The GnssSensor outputs the position in the MGRS coordinate system.

"},{"location":"Components/Sensors/GNSSSensor/#prefab","title":"Prefab","text":"

Prefab can be found under the following path:

Assets/AWSIM/Prefabs/Sensors/GnssSensor.prefab\n
"},{"location":"Components/Sensors/GNSSSensor/#link","title":"Link","text":"

GnssSensor has its own frame gnss_link in which its data is published. The sensor prefab is added to this frame. The gnss_link frame is added to the sensor_kit_base_link in the base_link object located in the URDF.

A detailed description of the URDF structure and sensors added to prefab Lexus RX450h 2015 is available in this section.

"},{"location":"Components/Sensors/GNSSSensor/#components","title":"Components","text":"

The GnssSensor functionality is split into two components:

Scripts can be found under the following path:

Assets/AWSIM/Prefabs/Sensors/Gnss/*\n
"},{"location":"Components/Sensors/GNSSSensor/#gnss-sensor-script","title":"Gnss Sensor (script)","text":"

This is the main script in which all calculations are performed:

  1. the position of the Object in Unity is read,
  2. this position is transformed to the ROS2 coordinate system (MGRS offset is added here),
  3. the result of the transformation is saved as the output of the component,
  4. for the current output a callback is called (which can be assigned externally).
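
A minimal sketch of these four steps is shown below. The helper ROS2Utility.UnityToRosPosition and the callback name OnOutputData are assumptions; mgrsOffsetPosition refers to the offset configured in the Environment component.

Vector3 unityPosition = transform.position;                                               // 1. read the Object's position in Unity\nVector3 rosPosition = ROS2Utility.UnityToRosPosition(unityPosition) + mgrsOffsetPosition; // 2. transform to the ROS2 coordinate system and add the MGRS offset\noutputData.Position = rosPosition;                                                        // 3. save the result as the component output\nOnOutputData?.Invoke(outputData);                                                         // 4. call the externally assignable callback\n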
"},{"location":"Components/Sensors/GNSSSensor/#elements-configurable-from-the-editor-level","title":"Elements configurable from the editor level","text":""},{"location":"Components/Sensors/GNSSSensor/#output-data","title":"Output Data","text":"Category Type Description Position Vector3 Position in the MGRS coordinate system."},{"location":"Components/Sensors/GNSSSensor/#gnss-ros2-publisher-script","title":"Gnss Ros2 Publisher (script)","text":"

Converts the data output from GnssSensor to ROS2 PoseStamped and PoseWithCovarianceStamped messages, each published on its own topic. The conversion and publication are performed using the Publish(GnssSensor.OutputData outputData) method, which is the callback triggered by Gnss Sensor (script) for the current output update.

Covariance matrix

The row-major representation of the 6x6 covariance matrix is filled with zeros and does not change while the script is running.

"},{"location":"Components/Sensors/GNSSSensor/#elements-configurable-from-the-editor-level_1","title":"Elements configurable from the editor level","text":""},{"location":"Components/Sensors/GNSSSensor/#published-topics","title":"Published Topics","text":" Category Topic Message type frame_id Pose /sensing/gnss/pose geometry_msgs/Pose gnss_link Pose with Covariance /sensing/gnss/pose_with_covariance geometry_msgs/PoseWithCovarianceStamped gnss_link"},{"location":"Components/Sensors/IMUSensor/","title":"IMU Sensor","text":""},{"location":"Components/Sensors/IMUSensor/#imusensor","title":"IMUSensor","text":""},{"location":"Components/Sensors/IMUSensor/#introduction","title":"Introduction","text":"

IMUSensor is a component that simulates an IMU (Inertial Measurement Unit) sensor. It measures acceleration (\({m}/{s^2}\)) and angular velocity (\({rad}/{s}\)) based on the transformation of the GameObject to which this component is attached.

"},{"location":"Components/Sensors/IMUSensor/#prefab","title":"Prefab","text":"

Prefab can be found under the following path:

Assets/AWSIM/Prefabs/Sensors/IMUSensor.prefab\n
"},{"location":"Components/Sensors/IMUSensor/#link-in-the-default-scene","title":"Link in the default Scene","text":"

IMUSensor has its own frame tamagawa/imu_link in which its data is published. The sensor prefab is added to this frame. The tamagawa/imu_link link is added to the sensor_kit_base_link in the base_link object located in the URDF.

A detailed description of the URDF structure and sensors added to prefab Lexus RX450h 2015 is available in this section.

"},{"location":"Components/Sensors/IMUSensor/#components","title":"Components","text":"

The IMUSensor functionality is split into two scripts:

Scripts can be found under the following path:

Assets/AWSIM/Scripts/Sensors/Imu/*\n
"},{"location":"Components/Sensors/IMUSensor/#imu-sensor-script","title":"IMU Sensor (script)","text":"

This is the main script in which all calculations are performed:

Warning

If the angular velocity about any axis is not a finite number (NaN or infinite), the angular velocity is published as a zero vector.
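
A minimal sketch of this guard, assuming the check happens right before the output is filled (not the actual AWSIM source):

// publish a zero vector when any component is NaN or infinite\nif (float.IsNaN(angularVelocity.x) || float.IsInfinity(angularVelocity.x) ||\n    float.IsNaN(angularVelocity.y) || float.IsInfinity(angularVelocity.y) ||\n    float.IsNaN(angularVelocity.z) || float.IsInfinity(angularVelocity.z))\n{\n    angularVelocity = Vector3.zero;\n}\n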

"},{"location":"Components/Sensors/IMUSensor/#elements-configurable-from-the-editor-level","title":"Elements configurable from the editor level","text":""},{"location":"Components/Sensors/IMUSensor/#output-data","title":"Output Data","text":"Category Type Description LinearAcceleration Vector3 Measured acceleration (m/s^2) AngularVelocity Vector3 Measured angular velocity (rad/s)"},{"location":"Components/Sensors/IMUSensor/#imu-ros2-publisher-script","title":"Imu Ros2 Publisher (script)","text":"

Converts the data output from IMUSensor to a ROS2 Imu message and publishes it. The conversion and publication are performed using the Publish(IMUSensor.OutputData outputData) method, which is the callback triggered by IMU Sensor (script) for the current output.

Warning

In each 3x3 covariance matrix, the row-major representation is filled with zeros and does not change while the script is running. In addition, the orientation field is assumed to be {1,0,0,0} and also does not change.

"},{"location":"Components/Sensors/IMUSensor/#elements-configurable-from-the-editor-level_1","title":"Elements configurable from the editor level","text":""},{"location":"Components/Sensors/IMUSensor/#published-topics","title":"Published Topics","text":" Category Topic Message type frame_id IMU data /sensing/imu/tamagawa/imu_raw sensor_msgs/Imu tamagawa/imu_link"},{"location":"Components/Sensors/LiDARSensor/AddNewLiDAR/","title":"Add New LiDAR","text":""},{"location":"Components/Sensors/LiDARSensor/AddNewLiDAR/#add-a-new-lidar","title":"Add a new LiDAR","text":"

RGLUnityPlugin (RGL) comes with a number of the most popular LiDAR model definitions and ready-to-use prefabs. However, it is also possible to create your own custom LiDAR. This section describes how to add a new LiDAR model that works with RGL, then create a prefab for it and add it to the scene.

Supported LiDARs

Not all LiDAR types are supported by RGL. Unfortunately, MEMS LiDARs scan with non-repetitive patterns, which the current implementation is not able to reproduce.

"},{"location":"Components/Sensors/LiDARSensor/AddNewLiDAR/#1-add-a-new-lidar-model","title":"1. Add a new LiDAR model","text":"

The example shows the addition of a LiDAR named NewLidarModel.

To add a new LiDAR model, perform the following steps:

  1. Navigate to Assets/RGLUnityPlugin/Scripts/LidarModels.

  2. Add its name at the end of the enumeration in LidarModels.cs. The order of the enums must not be changed, to keep existing prefabs working.

  3. Now, it is time to define the laser (also called a channel) distribution of the LiDAR.

    !!! info If your LiDAR:

          - has a uniform laser distribution\n      - has an equal range for all of the lasers\n      - fires all of the rays (beams) at the same time\n\n  You can skip this step and use our helper method to generate a simple uniform laser array definition (more information in the next step).\n

    1. Laser distribution is represented by LaserArray, which consists of: - centerOfMeasurementLinearOffsetMm - 3D translation from the game object's origin to the LiDAR's origin. Preview in 2D:

          <img src=\"img/LidarOriginParameter.png\" width=\"300\">\n\n  - `focalDistanceMm` - Distance from the sensor center to the focal point where all laser beams intersect.\n\n      <img src=\"img/LidarFocalDistanceParamter.png\" width=\"300\">\n\n  - `lasers` - array of lasers (channels) with a number of parameters:\n\n      - `horizontalAngularOffsetDeg` - horizontal angle offset of the laser (Azimuth)\n      - `verticalAngularOffsetDeg` - vertical angle offset of the laser (Elevation)\n      - `verticalLinearOffsetMm` - vertical offset of the laser (translation from origin)\n      - `ringId` - Id of the ring (in most cases laser Id)\n      - `timeOffset` - time offset of the laser firing in milliseconds (with reference to the first laser in the array)\n      - `minRange` - minimum range of the laser (set if lasers have different ranges)\n      - `maxRange` - maximum range of the laser (set if lasers have different ranges)\n

    1. To define a new laser distribution, create a new definition in LaserArrayLibrary.cs

      ![lidar_array](img/LidarLaserArray.png)\n\n  - Add a new public static instance of `LaserArray` with the definition.\n\n  In this example, `NewLidarModel` laser distribution consists of 5 lasers with\n\n      - elevations: 15, 10, 0, -10, -15 degrees\n      - azimuths: 1.4, -1.4, 1.4, -1.4, 1.4 degrees\n      - ring Ids: 1, 2, 3, 4, 5\n      - time offsets: 0, 0.01, 0.02, 0.03, 0.04 milliseconds\n      - an equal range that will be defined later\n\n  !!! warning \"Coordinate system\"\n      Keep in mind that *Unity* has a left-handed coordinate system, while most of the *LiDAR's* manuals use a right-handed coordinate system. In that case, reverse sign of the values of the angles.\n
  4. The last step is to create a LiDAR configuration by adding an entry to LidarConfigurationLibrary.cs

    Add a new item to the ByModel dictionary that collects LiDAR model enumerations with their BaseLidarConfiguration choosing one of the implementations:

    - UniformRangeLidarConfiguration - lidar configuration for uniformly distributed rays along the horizontal axis with a uniform range for all the rays (it additionally contains minRange and maxRange parameters)
    - LaserBasedRangeLidarConfiguration - lidar configuration for uniformly distributed rays along the horizontal axis with ranges retrieved from the lasers description
    - Or create your custom implementation in LidarConfiguration.cs like:
        - HesaiAT128LidarConfiguration
        - HesaiQT128C2XLidarConfiguration
        - HesaiPandar128E4XLidarConfiguration

    !!! note \"Lidar configuration parameters descrition\" Please refer to this section for the detailed description of all configuration parameters.

  5. Done. The new LiDAR preset should be available via the Unity Inspector. A condensed code sketch of steps 2-4 follows this list.

    The frame rate of the LiDAR can be set in the Automatic Capture Hz parameter.

    Note: In real-world LiDARs, the frame rate affects the horizontal resolution. The current implementation separates these two parameters, so remember to change the resolution manually.
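
The condensed sketch below summarizes steps 2-4. The member names follow the parameter list on this page, but the exact syntax of the library entries is an assumption and may differ from the actual RGLUnityPlugin source:

// LidarModels.cs - append the new model at the end of the enumeration (do not reorder existing entries)\npublic enum LidarModel\n{\n    // ... existing models ...\n    NewLidarModel,\n}\n\n// LaserArrayLibrary.cs - laser (channel) distribution of the new model\npublic static LaserArray NewLidarModel => new LaserArray\n{\n    centerOfMeasurementLinearOffsetMm = new Vector3(0.0f, 0.0f, 0.0f),\n    focalDistanceMm = 0.0f,\n    lasers = new[]\n    {   // Unity is left-handed, so the signs of right-handed manual angles are reversed\n        new Laser { horizontalAngularOffsetDeg = 1.4f, verticalAngularOffsetDeg = -15.0f, ringId = 1, timeOffset = 0.0f },\n        // ... remaining lasers: elevations 10, 0, -10, -15 deg; alternating azimuths; ringIds 2-5 ...\n    }\n};\n\n// LidarConfigurationLibrary.cs - register the model in the ByModel dictionary\n{ LidarModel.NewLidarModel, new UniformRangeLidarConfiguration\n    {\n        laserArray = LaserArrayLibrary.NewLidarModel,\n        minRange = 0.1f, maxRange = 100.0f, // uniform range for all rays\n    }\n},\n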

"},{"location":"Components/Sensors/LiDARSensor/AddNewLiDAR/#2-create-new-lidar-prefab","title":"2. Create new LiDAR prefab","text":"
  1. Create an empty object and name it appropriately according to the LiDAR model.
  2. Attach the LidarSensor.cs script to the created object.
  3. Set the newly added LiDAR model in the Model Preset field and check that the configuration loads correctly. You can now customize it however you like.
  4. (Optional) Attach the PointCloudVisualization.cs script for visualization purposes.
  5. To publish the point cloud via ROS2, attach the RglLidarPublisher.cs script to the created object.
  6. Set the topics on which you want the data to be published and their frame.
  7. Save the prefab in the project.
"},{"location":"Components/Sensors/LiDARSensor/AddNewLiDAR/#3-test-your-prefab","title":"3. Test your prefab","text":"
  1. Create a new scene (remember to add the SceneManager) or use one of the existing sample scenes.
  2. Add the prepared LiDAR prefab by dragging the prefab file and dropping it into the scene.

  3. A LiDAR GameObject should be instantiated automatically.

  4. Now you can run the scene and check how your LiDAR works.

Success

We encourage you to develop a vehicle using the new LiDAR you have added - learn how to do this here.

"},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/","title":"LiDAR Sensor","text":""},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#lidarsensor","title":"LidarSensor","text":""},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#introduction","title":"Introduction","text":"

LidarSensor is the component that simulates the LiDAR (Light Detection and Ranging) sensor. LiDAR works by emitting laser beams that bounce off objects in the environment, and then measuring the time it takes for the reflected beams to return, allowing the sensor to create a 3D map of the surroundings. This data is used for object detection, localization, and mapping.

LiDAR in an autonomous vehicle can be used for many purposes. The ones mounted on the top of autonomous vehicles are primarily used to scan the vehicle's surroundings for object detection, localization, and mapping.

LiDARs placed on the left and right sides of the vehicle are mainly used to monitor the traffic lane and detect vehicles moving in adjacent lanes, enabling safe maneuvers such as lane changing or turning.

LidarSensor component is a part of RGLUnityPlugin that integrates the external RobotecGPULidar (RGL) library with Unity. RGL also makes it possible to provide additional information about objects; more about it here.

Use RGL in your scene

If you want to use RGL in your scene, make sure the scene has a SceneManager component added and all objects meet the usage requirements.

RGL default scenes

If you would like to see how LidarSensor works using RGL or run some tests, we encourage you to familiarize yourself with the RGL test scenes section.

Supported LiDARs

The current script implementation allows you to configure the prefab for any mechanical LiDAR. You can read about how to do it here. MEMS-based LiDARs, due to their different design, are not yet fully supported.

"},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#prefabs","title":"Prefabs","text":"

Prefabs can be found under the following path:

Assets/AWSIM/Prefabs/RobotecGPULidars/*\n

The table of available prefabs can be found below:

LiDAR Path Appearance HESAI Pandar40P HesaiPandar40P.prefab HESAI PandarQT64 HesaiPandarQT64.prefab HESAI PandarXT32 HesaiPandarXT32.prefab HESAI QT128C2X HesaiQT128C2X.prefab HESAI Pandar128E4X HesaiPandar128E4X.prefab HESAI AT128 E2X HesaiAT128E2X.prefab Ouster OS1-64 OusterOS1-64.prefab Velodyne VLP-16 VelodyneVLP16.prefab Velodyne VLC-32C VelodyneVLP32C.prefab Velodyne VLS-128-AP VelodyneVLS128.prefab"},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#link-in-the-default-scene","title":"Link in the default Scene","text":"

LidarSensor is configured in the default EgoVehicle prefab. It is added to the URDF object as a child of sensor_kit_base_link. A LidarSensor placed in this way does not have its own frame, and the data is published relative to sensor_kit_base_link. More details about the location of the sensors in the vehicle can be found here.

A detailed description of the URDF structure and sensors added to prefab Lexus RX450h 2015 is available in this section.

Additional LiDARs

For a LiDAR placed on the left side, right side or rear, an additional link should be defined.

"},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#components-and-resources","title":"Components and Resources","text":"

The LiDAR sensor simulation functionality is split into three components:

Moreover, the scripts use Resources to provide configuration for prefabs of supported lidar models:

These are elements of the RGLUnityPlugin, you can read more here.

"},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#lidar-sensor-script","title":"Lidar Sensor (script)","text":"

This is the main component that creates the RGL node pipeline for the LiDAR simulation. The pipeline consists of:

"},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#elements-configurable-from-the-editor-level","title":"Elements configurable from the editor level","text":""},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#output-data","title":"Output Data","text":"

LidarSensor provides public methods to extend this pipeline with additional RGL nodes. In this way, other components can request point cloud processing operations and receive data in the desired format.

Example of how to get XYZ point cloud data:

  1. To obtain point cloud data from another component, you have to create a new RGLNodeSequence with an RGL node that yields the XYZ field and connect it to the LidarSensor:
    rglOutSubgraph = new RGLNodeSequence().AddNodePointsYield(\"OUT_XYZ\", RGLField.XYZ_F32);\nlidarSensor = GetComponent<LidarSensor>();\nlidarSensor.ConnectToWorldFrame(rglOutSubgraph); // you can also connect to Lidar frame using ConnectToLidarFrame\n// You can add a callback to receive a notification when new data is ready\nlidarSensor.onNewData += HandleLidarDataMethod;\n
  2. To get data from RGLNodeSequence call GetResultData:
    Vector3[] xyz = new Vector3[0];\nrglOutSubgraph.GetResultData<Vector3>(ref xyz);\n
"},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#rgl-lidar-publisher-script","title":"Rgl Lidar Publisher (script)","text":"

RglLidarPublisher extends the main RGL pipeline created in LidarSensor with RGL nodes that produce point clouds in a specific format and publish them to the ROS2 topic. Thanks to the ROS2 integration with RGL, point clouds can be published directly from the native library. RGL creates a ROS2 node named /RobotecGPULidar with publishers generated by RGL nodes.

Currently, RglLidarPublisher implements two ROS2 publishers:

Details on the construction of these formats are available in the PointCloudFormats under the following path:

Assets/AWSIM/Scripts/Sensors/LiDAR/PointCloudFormats.cs\n

rosPCL48 format

For a better understanding of the rosPCL48 format, we encourage you to familiarize yourself with the point cloud pre-processing process in Autoware, which is described here.

"},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#elements-configurable-from-the-editor-level_1","title":"Elements configurable from the editor level","text":""},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#published-topics","title":"Published Topics","text":" Category Topic Message type frame_id PointCloud 24-byte format /lidar/pointcloud sensor_msgs/PointCloud2 world PointCloud 48-byte format /lidar/pointcloud_ex sensor_msgs/PointCloud2 world"},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#point-cloud-visualization-script","title":"Point Cloud Visualization (script)","text":"

A component visualizing a point cloud obtained from RGL in the form of a Vector3 list as colored points in the Unity scene. Based on the defined color table, it colors the points depending on the height at which they are located.

The obtained points are displayed as the vertices of a mesh, and their coloring is possible thanks to the use of the PointCloudMaterial material, which can be found under the following path:

Assets/RGLUnityPlugin/Resources/PointCloudMaterial.mat\n

Point Cloud Visualization preview:

"},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#elements-configurable-from-the-editor-level_2","title":"Elements configurable from the editor level","text":""},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#read-material-information","title":"Read material information","text":"

To ensure the publication of the information described in this section, GameObjects must be adjusted accordingly. This tutorial describes how to do it.

"},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#intensity-texture","title":"Intensity Texture","text":"

RGL Unity Plugin allows assigning an Intensity Texture to GameObjects to produce a point cloud containing information about the intensity of lidar ray hits. It can be used to distinguish different levels of an object's reflectivity.

"},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#output-data_1","title":"Output data","text":"

Point cloud containing intensity is published on the ROS2 topic via RglLidarPublisher component. The intensity value is stored in the intensity field of the sensor_msgs/PointCloud2 message.

"},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#instance-segmentation","title":"Instance segmentation","text":"

RGL Unity Plugin allows assigning an ID to GameObjects to produce a point cloud containing information about hit objects. It can be used for instance/semantic segmentation tasks. This tutorial describes how to do it.

LidarInstanceSegmentationDemo

If you would like to see how LidarInstanceSegmentationDemo works using RGL or run some tests, we encourage you to familiarize yourself with this section.

"},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#output-data_2","title":"Output data","text":"

Point cloud containing the IDs of hit objects is published on the ROS2 topic via the RglLidarPublisher component. It is disabled by default. Properties related to this feature are marked below:

"},{"location":"Components/Sensors/LiDARSensor/LiDARSensor/#dictionary-mapping","title":"Dictionary mapping","text":"

The resulting simulation data contains only the IDs of objects, without their human-readable names. To facilitate the interpretation of such data, a function has been implemented to save a file with a dictionary mapping instance IDs to GameObject names. It writes pairs of values in the yaml format:
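
For illustration, a fragment of such a file could look as follows; the IDs and names here are hypothetical:

1: Taxi\n2: Driver\n3: Vehicles\n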

To enable saving the dictionary mapping, set the output file path in the Semantic Category Dictionary File property of the Scene Manager component:

The dictionary mapping file will be saved at the end of the simulation.

"},{"location":"Components/Sensors/LiDARSensor/RGLUnityPlugin/","title":"RGLUnityPlugin","text":""},{"location":"Components/Sensors/LiDARSensor/RGLUnityPlugin/#rglunityplugin","title":"RGLUnityPlugin","text":"

Robotec GPU Lidar (RGL) is an open source high performance lidar simulator running on CUDA-enabled GPUs. It is a cross-platform solution compatible with both Windows and Linux operating systems. RGL utilizes RTX cores for acceleration, whenever they are accessible.

RGL is used in AWSIM for performance reasons. Thanks to it, it is possible to perform a large number of calculations using the GPU, which is extremely helpful due to the size of the scenes. AWSIM is integrated with RGL out-of-the-box - using RGLUnityPlugin asset.

Warning

If you want to use RGL in your scene, make sure the scene has an RGLSceneManager component added and all objects meet the usage requirements.

"},{"location":"Components/Sensors/LiDARSensor/RGLUnityPlugin/#concept","title":"Concept","text":"

Describing the concept of using RGL in AWSIM, we distinguish:

Producing a point cloud is based on the use of a Scene containing Entities with Meshes, and placing an Ego Entity with LiDAR sensor that creates a Graph describing ray pattern and performing raytracing. In subsequent frames of the simulation, SceneManager synchronizes the scene between Unity and RGL, and LiDAR sensor updates rays pose on the scene and triggers Graph to perform raytracing and format desired output.

"},{"location":"Components/Sensors/LiDARSensor/RGLUnityPlugin/#package-structure","title":"Package structure","text":"

RGLUnityPlugin asset contains:

"},{"location":"Components/Sensors/LiDARSensor/RGLUnityPlugin/#scripts","title":"Scripts","text":""},{"location":"Components/Sensors/LiDARSensor/RGLUnityPlugin/#scenemanager","title":"SceneManager","text":"

Each scene needs a SceneManager component to synchronize models between Unity and RGL. On every frame, it detects changes in the Unity scene and propagates them to the native RGL code. When necessary, it obtains 3D models from GameObjects on the scene, and when they are no longer needed, it removes them.

Three different strategies to interact with in-simulation 3D models are implemented. SceneManager uses one of the following policies to construct the scene in RGL:

| Mesh Source Strategy | Static Entity | Animated Entity (NPC) |
| --- | --- | --- |
| Only Colliders | Collider | Collider |
| Regular Meshes And Colliders Instead Of Skinned | Regular Mesh | Collider |
| Regular Meshes And Skinned Meshes | Regular Mesh | Regular Mesh |

Mesh source can be changed in the SceneManager script properties:

Performance

SceneManager performance depends on the selected mesh source option.

"},{"location":"Components/Sensors/LiDARSensor/RGLUnityPlugin/#usage-requirements","title":"Usage requirements","text":"

To be detectable by RGL, objects must fulfill the following requirements:

  1. Contain one of the components: Collider, Mesh Renderer, or Skinned Mesh Renderer - depending on the SceneManager mesh source parameter.
  2. Be readable from CPU-accessible memory - this can be achieved by ticking the Read/Write Enabled checkbox in the mesh settings.

    !!! note \"Readable objects\" Primitive Objects are readable by default.

    !!! example The activated Readable option in the mesh should look like this.

      <img src=\"readable.png\" width=\"75%\">\n
"},{"location":"Components/Sensors/LiDARSensor/ReadMaterialInformation/","title":"Read Material Information","text":"

RGL Unity Plugin allows you to:

"},{"location":"Components/Sensors/LiDARSensor/ReadMaterialInformation/#add-intensity-texture-assignment","title":"Add Intensity Texture assignment","text":"

To enable reading material information, add the IntensityTexture component to every GameObject that is expected to have non-default intensity values.

After that, the desired texture has to be inserted into the Intensity Texture slot.

The texture has to be in R8 format. That means 8 bits in the red channel (values 0-255).

When the texture is assigned, the intensity values will be read from the texture and added to the point cloud if and only if the mesh component in the GameObject has a set of properly created texture coordinates.

The expected number of texture coordinates is equal to the number of vertices in the mesh. The number of indices is irrelevant. Otherwise, the texture will not be read properly.
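
A minimal sketch of verifying this requirement for a given mesh (an illustration only, not part of the plugin):

Mesh mesh = GetComponent<MeshFilter>().sharedMesh;\nbool uvValid = mesh.uv.Length == mesh.vertexCount; // the number of texture coordinates must equal the number of vertices\n// the number of indices (mesh.triangles.Length) is irrelevant for this check\n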

"},{"location":"Components/Sensors/LiDARSensor/ReadMaterialInformation/#add-id-assignment","title":"Add ID assignment","text":"

To enable segmentation, add the SemanticCategory component to every GameObject that is expected to have a distinct ID. All meshes that belong to a given object will inherit its ID. The ID inheritance mechanism allows IDs to be overwritten for individual meshes/objects. This solution also enables the creation of coarse categories (e.g., Pedestrians, Vehicles).

Example

SemanticCategory component is assigned to the Taxi GameObject. All meshes in the Taxi GameObject will have the same instance ID as Taxi:

Example

The driver has its own SemanticCategory component, so his instance ID will differ from the rest of the meshes:

Example

SemanticCategory component is assigned to the Vehicles GameObject that contains all of the cars on the scene:

"},{"location":"Components/Sensors/LiDARSensor/ReadMaterialInformation/#dictionary-mapping","title":"Dictionary mapping","text":"

The resulting simulation data contains only the IDs of objects, without their human-readable names. To facilitate the interpretation of such data, a function has been implemented to save a file with a dictionary mapping instance IDs to GameObject names. It writes pairs of values in the yaml format:

To enable saving the dictionary mapping, set the output file path in the Semantic Category Dictionary File property of the Scene Manager component:

The dictionary mapping file will be saved at the end of the simulation.

"},{"location":"Components/Sensors/VehicleStatusSensor/","title":"Vehicle Status Sensor","text":""},{"location":"Components/Sensors/VehicleStatusSensor/#vehiclestatussensor","title":"VehicleStatusSensor","text":""},{"location":"Components/Sensors/VehicleStatusSensor/#introduction","title":"Introduction","text":"

VehicleStatusSensor is a component that is designed to aggregate information about the current state of the vehicle. It aggregates information about:

"},{"location":"Components/Sensors/VehicleStatusSensor/#prefab","title":"Prefab","text":"

Prefab can be found under the following path:

Assets/AWSIM/Prefabs/Sensors/VehicleStatusSensor.prefab\n
"},{"location":"Components/Sensors/VehicleStatusSensor/#link-in-the-default-scene","title":"Link in the default Scene","text":"

This sensor is added directly to the URDF link in the EgoVehicle prefab.

A detailed description of the URDF structure and sensors added to prefab Lexus RX450h 2015 is available in this section.

"},{"location":"Components/Sensors/VehicleStatusSensor/#components","title":"Components","text":"

All features are implemented within the Vehicle Report Ros2 Publisher (script) which can be found under the following path:

Assets/AWSIM/Prefabs/Sensors/*\n
"},{"location":"Components/Sensors/VehicleStatusSensor/#vehicle-report-ros2-publisher-script","title":"Vehicle Report Ros2 Publisher (script)","text":"

The script is responsible for updating and publishing each of the aggregated data on a separate topic. Therefore, it has 6 publishers publishing the appropriate type of message with a constant frequency - one common for all data.

"},{"location":"Components/Sensors/VehicleStatusSensor/#elements-configurable-from-the-editor-level","title":"Elements configurable from the editor level","text":"

Vehicle configuration

An important element of the script configuration that must be set is the scene Object (Vehicle). It will be used for reading all the data needed. The appropriate EgoVehicle object should be selected.

If you can't select the right object, make sure it is set up correctly - that it has all the scripts needed for an EgoVehicle added.

"},{"location":"Components/Sensors/VehicleStatusSensor/#published-topics","title":"Published topics","text":" Category Topic Message type frame_id Control mode /vehicle/status/control_mode autoware_auto_vehicle_msgs/ControlModeReport - Gear status /vehicle/status/gear_status autoware_auto_vehicle_msgs/GearReport - Steering status /vehicle/status/steering_status autoware_auto_vehicle_msgs/SteeringReport - Turn indicators status /vehicle/status/turn_indicators_status autoware_auto_vehicle_msgs/TurnIndicatorsReport - Hazard lights status /vehicle/status/hazard_lights_status autoware_auto_vehicle_msgs/HazardLightsReport - Velocity status /vehicle/status/velocity_status autoware_auto_vehicle_msgs/VelocityReport base_line"},{"location":"Components/Traffic/NPCs/Pedestrian/","title":"Pedestrian","text":""},{"location":"Components/Traffic/NPCs/Pedestrian/#introduction","title":"Introduction","text":"

NPCPedestrian is an object that simulates a human standing or moving on the scene. It can move cyclically in any chosen place thanks to the available scripts. Traffic light tracking will be implemented in the future.

Sample scene

If you would like to see how NPCPedestrian works or run some tests, we encourage you to familiarize yourself with the NPCPedestrianSample default scene described in this section.

"},{"location":"Components/Traffic/NPCs/Pedestrian/#prefab-and-fbx","title":"Prefab and Fbx","text":"

Prefab can be found under the following path:

Assets/AWSIM/Prefabs/NPCs/Pedestrians/humanElegant.prefab\n
"},{"location":"Components/Traffic/NPCs/Pedestrian/#visual-elements","title":"Visual elements","text":"

Prefab is developed using models available in the form of a *.fbx file. From this file, the visual elements of the model, the Animator and the LOD were loaded. The Animator and LOD are added as components of the main-parent GameObject in the prefab, while the visual elements of the model are added as its children.

*.fbx file can be found under the following path:

Assets/AWSIM/Models/NPCs/Pedestrians/Human/humanElegant.fbx\n

NPCPedestrian prefab has the following content:

The ReferencePoint is used by the NPC Pedestrian (script) described here.

"},{"location":"Components/Traffic/NPCs/Pedestrian/#link-in-the-default-scene","title":"Link in the default Scene","text":"

Pedestrians implemented in the scene are usually added in one aggregating object - in this case it is NPCPedestrians. This object is added to the Environment prefab.

"},{"location":"Components/Traffic/NPCs/Pedestrian/#components","title":"Components","text":"

There are several components responsible for the full functionality of NPCPedestrian:

Scripts can be found under the following path:

Assets/AWSIM/Scripts/NPCs/Pedestrians/*\n
"},{"location":"Components/Traffic/NPCs/Pedestrian/#rigidbody","title":"Rigidbody","text":"

Rigidbody ensures that the object is controlled by the physics engine. In order to connect the animation to the object, the Is Kinematic option must be enabled. With Is Kinematic set, each NPCPedestrian object will have no physical interaction with other objects - it will not react to a vehicle that hits it. Use Gravity should be turned off - the correct position of the pedestrian in relation to the ground is ensured by the NPC Pedestrian (script). In addition, Interpolate should be turned on to ensure the physics engine's effects are smoothed out.
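
Expressed in code, the setup above corresponds to the following sketch (in AWSIM these flags are set in the Inspector rather than at runtime):

Rigidbody rb = GetComponent<Rigidbody>();\nrb.isKinematic = true;                                 // animation-driven, no physical reaction to collisions\nrb.useGravity = false;                                 // ground following is handled by NPC Pedestrian (script)\nrb.interpolation = RigidbodyInterpolation.Interpolate; // smooth out the physics engine's effects\n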

"},{"location":"Components/Traffic/NPCs/Pedestrian/#lod-level-of-detail","title":"LOD (Level of Detail)","text":"

LOD adjusts the level of detail of the object depending on the ratio of the GameObject's screen-space height to the total screen height. The pedestrian model has two object groups, suffixed LOD0 and LOD1. LOD0 objects are much more detailed than LOD1 - they have many more vertices in their Meshes. Displaying complex meshes requires more performance, so if the GameObject occupies a small part of the screen, the less complex LOD1 objects are used.

In the case of the NPCPedestrian prefab, if its object is less than 25% of the height of the screen then objects with the LOD1 suffix are used. For values less than 1% the object is culled.

"},{"location":"Components/Traffic/NPCs/Pedestrian/#animator","title":"Animator","text":"

Animator component provides animation assignments to a GameObject in the scene. It uses a developed Controller which defines which animation clips to use and controls when and how to blend and transition between them.

The AnimationController for humans should have two float parameters for proper transitions. Transitions between animation clips are made depending on the values of these parameters:

The developed controller can be found under the following path: Assets/AWSIM/Models/NPCs/Pedestrians/Human/Human.controller

Walking to running transition

The example shows the walking state and the transition to running once the condition \(\mathrm{moveSpeed} > 1.6\) is met.
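
A one-line sketch of driving this transition from code; the parameter name comes from the condition above, while animator and currentSpeed are assumed to be available:

animator.SetFloat(\"moveSpeed\", currentSpeed); // exceeding 1.6 triggers the walking-to-running transition\n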

"},{"location":"Components/Traffic/NPCs/Pedestrian/#npc-pedestrian-script","title":"NPC Pedestrian (script)","text":"

The script takes the Rigidbody and Animator components and combines them in such a way that the actual animation depends on the movement of the Rigidbody. It provides inputs that allow the pedestrian to move - to change its position and orientation. In addition, the ReferencePoint is used to ensure that the pedestrian follows the ground plane correctly.

"},{"location":"Components/Traffic/NPCs/Pedestrian/#elements-configurable-from-the-editor-level","title":"Elements configurable from the editor level","text":""},{"location":"Components/Traffic/NPCs/Pedestrian/#input-data","title":"Input Data","text":"Category Type Description SetPosition Vector3 Move the NPCPedestrian so that the reference point is at the specified coordinates. SetRotation Vector3 Rotate the NPCPedestrian so that the orientation of the reference point becomes the specified one."},{"location":"Components/Traffic/NPCs/Pedestrian/#simple-pedestrian-walker-controller-script","title":"Simple Pedestrian Walker Controller (script)","text":"

Simple Pedestrian Walker Controller is a script that allows the pedestrian to cyclically move back and forth along a straight line. One-way motion is performed over a fixed time given by the Duration parameter and with a constant linear velocity given by the Speed parameter. The script uses the NPCPedestrian controls provided by the NPC Pedestrian (script) inputs, as sketched below.
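
A minimal sketch of this behaviour, assuming duration, speed and npcPedestrian are configured fields (simplified, not the actual AWSIM source):

float timer; Vector3 position; Quaternion heading = Quaternion.identity;\nvoid Update()\n{\n    timer += Time.deltaTime;\n    if (timer >= duration) { timer = 0f; heading *= Quaternion.Euler(0f, 180f, 0f); } // reverse direction after Duration seconds\n    position += heading * Vector3.forward * (speed * Time.deltaTime);                 // constant linear Speed along a straight line\n    npcPedestrian.SetPosition(position);                                              // NPC Pedestrian (script) inputs\n    npcPedestrian.SetRotation(heading.eulerAngles);\n}\n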

Pedestrian walking on the sidewalk

"},{"location":"Components/Traffic/NPCs/Pedestrian/#collider","title":"Collider","text":"

Collider is an optional pedestrian component. By default, NPCPedestrian doesn't have this component added. It can be added if you want to detect collisions, e.g. with an EgoVehicle. There are several types of colliders; choose the right one and configure it for your own requirements.

Capsule Collider

An example of a CapsuleCollider that covers almost the entire pedestrian.

"},{"location":"Components/Traffic/NPCs/Vehicle/","title":"Vehicle","text":""},{"location":"Components/Traffic/NPCs/Vehicle/#npcvehicle","title":"NPCVehicle","text":""},{"location":"Components/Traffic/NPCs/Vehicle/#introduction","title":"Introduction","text":"

NPCVehicle is a non-playable object that simulates a vehicle that is stationary or moving around the scene. It can move on roads - more specifically, on TrafficLanes - thanks to the use of the TrafficSimulator, which you can read more about here. Vehicles moving around the scene take each other into account: they avoid collisions, follow traffic lights, and have an implemented mechanism for yielding the right of way.

Sample scene

If you would like to see how NPCVehicle works or run some tests, we encourage you to familiarize yourself with the NPCVehicleSample default scene described in this section.

Ego Vehicle

If you are interested in the most important vehicle on the scene - Ego Vehicle, we encourage you to read this section.

"},{"location":"Components/Traffic/NPCs/Vehicle/#prefabs-and-fbxs","title":"Prefabs and Fbxs","text":"

Prefabs can be found under the following path:

Assets/AWSIM/Prefabs/NPCs/Vehicles/*\n

The table shows the available prefabs of the vehicles:

| | Hatchback | SmallCar | Taxi | Truck | Van |
| --- | --- | --- | --- | --- | --- |
| Appearance | | | | | |
| Prefab | Hatchback.prefab | SmallCar.prefab | Taxi-64.prefab | Truck_2t.prefab | Van.prefab |

NPCVehicle prefab has the following content:

As you can see, it consists of two parent GameObjects - Visuals, aggregating the visual elements, and Colliders, aggregating the colliders - and the single object CoM. All objects are described in the sections below.

"},{"location":"Components/Traffic/NPCs/Vehicle/#visual-elements","title":"Visual elements","text":"

Prefabs are developed using models available in the form of *.fbx files. For each vehicle, the visual elements and LOD were loaded from the appropriate *.fbx file. The LOD is always added as a component of the main-parent GameObject in the prefab, while the visual elements of the model are aggregated and added in the Visuals object.

*.fbx file for each vehicle is located in the appropriate Models directory for the vehicle under the following path:

Assets/AWSIM/Models/NPCs/Vehicles/<vehicle_name>/Models/<vehicle_name>.fbx\n

As you can see, the additional visual element is Driver.

It was also loaded from the *.fbx file which can be found under the following path:

Assets/AWSIM/Models/NPCs/Vehicles/Driver/Model/Driver.fbx\n

Vehicle fbx

The content of a sample *.fbx file is presented below; all elements except Collider have been added to the prefab as visual elements of the vehicle. Collider is used as the Mesh source for the Mesh Collider in the BodyCollider object.

"},{"location":"Components/Traffic/NPCs/Vehicle/#link","title":"Link","text":"

The default scene does not have vehicles implemented in fixed places, but they are spawned by RandomTrafficSimulator which is located in the Environment prefab. Therefore, before starting the simulation, no NPCVehicle object is on the scene.

When you run the simulation, you can see objects appearing as children of RandomTrafficSimulator:

In each NPCVehicle prefab, the local coordinate system of the vehicle (the main prefab link) should be defined on the axis of the rear wheels projected onto the ground - in the middle of the distance between them. This matters when characterizing the dynamics of the object, as it makes describing its motion and control more convenient.

"},{"location":"Components/Traffic/NPCs/Vehicle/#components","title":"Components","text":"

There are several components responsible for the full functionality of NPCVehicle:

Script can be found under the following path:

Assets/AWSIM/Scripts/NPCs/Vehicles\n
"},{"location":"Components/Traffic/NPCs/Vehicle/#com","title":"CoM","text":"

CoM (Center of Mass) is an additional link that is defined to set the center of mass in the Rigidbody. The NPC Vehicle (script) is responsible for its assignment. It should be defined in accordance with reality. Most often, the center of mass of the vehicle is located in its center, at the height of its wheel axis - as shown below.
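
A sketch of what this assignment boils down to (simplified; note that Rigidbody.centerOfMass is expressed in the body's local space):

rb.centerOfMass = rb.transform.InverseTransformPoint(comTransform.position); // comTransform is the CoM link\n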

"},{"location":"Components/Traffic/NPCs/Vehicle/#colliders","title":"Colliders","text":"

Colliders are used to ensure collisions between objects. In NPCVehicle, the main BodyCollider and a wheel collider for each wheel are added.

"},{"location":"Components/Traffic/NPCs/Vehicle/#body-collider","title":"Body Collider","text":"

BodyCollider is a vehicle Object responsible for ensuring collision with other objects. Additionally, it can be used to detect these collisions. The MeshCollider uses a Mesh of an Object to build its Collider. The Mesh for the BodyCollider was also loaded from the *.fbx file, similarly to the visual elements.

"},{"location":"Components/Traffic/NPCs/Vehicle/#wheels-colliders","title":"Wheels Colliders","text":"

WheelsColliders are an essential element from the point of view of driving vehicles on the road. They are the only ones that have contact with the roads and it is important that they are properly configured. Each vehicle, apart from the visual elements related to the wheels, should also have 4 colliders - one for each wheel.

To avoid manual Inspector entry for each WheelCollider, the WheelColliderConfig has been developed. It ensures that friction is set to 0 and that only wheel suspension and collisions are enabled.
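
A sketch of the friction zeroing that WheelColliderConfig performs (simplified, assumed):

WheelFrictionCurve friction = wheelCollider.forwardFriction;\nfriction.stiffness = 0f;                    // no friction forces - only suspension and collisions remain active\nwheelCollider.forwardFriction = friction;\nfriction = wheelCollider.sidewaysFriction;\nfriction.stiffness = 0f;\nwheelCollider.sidewaysFriction = friction;\n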

Wheel Collider Config

For a better understanding of the meaning of WheelCollider we encourage you to read this manual.

"},{"location":"Components/Traffic/NPCs/Vehicle/#lod","title":"LOD","text":"

LOD adjusts the level of detail of the object depending on the ratio of the GameObject's screen-space height to the total screen height. Vehicle models have only one LOD0 group, so there is no reduction in model complexity when the vehicle does not occupy a large part of the screen. It is only culled when it occupies less than 2% of the screen height.

"},{"location":"Components/Traffic/NPCs/Vehicle/#rigidbody","title":"Rigidbody","text":"

Rigidbody ensures that the object is controlled by the physics engine. The Mass of the vehicle should approximate its actual weight. In order for the vehicle to physically interact with other objects - to react to collisions - Is Kinematic must be turned off. Use Gravity should be turned on, to ensure the correct behavior of the body during movement. In addition, Interpolate should be turned on to ensure the physics engine's effects are smoothed out.

"},{"location":"Components/Traffic/NPCs/Vehicle/#npc-vehicle-script","title":"NPC Vehicle (script)","text":"

The script takes the Rigidbody and provides inputs that allow the NPCVehicle to move. The script inputs give the ability to set the position and orientation of the vehicle, taking into account the effects of suspension and gravity. In addition, the script uses the CoM link reference to assign the center of mass of the vehicle to the Rigidbody.

Script inputs are used by RandomTrafficSimulator, which controls the vehicles on the scene - it is described here.

"},{"location":"Components/Traffic/NPCs/Vehicle/#input-data","title":"Input Data","text":"Category Type Description SetPosition Vector3 Move the NPCVehicle so that its x, z coordinates are same as the specified coordinates. Pitch and roll are determined by physical operations that take effects of suspension and gravity into account. SetRotation Vector3 Rotate the NPCVehicle so that its yaw becomes equal to the specified one. Vertical movement is determined by physical operations that take effects of suspension and gravity into account.

Visual Object Root is a reference to the parent aggregating visuals, it can be used to disable the appearance of visual elements of the NPCVehicle in the scene.

Bounds represents an axis-aligned bounding box of the NPCVehicle. It is used primarily to detect collisions between vehicles during spawning, yielding and other events. Moreover, the vehicle bounds are displayed by Gizmos.

The settings of the remaining elements, i.e. the Axle and the Lights, are described here and here.

No Gizmo visualization

If you don't see Gizmo's visual elements, remember to turn them on.

"},{"location":"Components/Traffic/NPCs/Vehicle/#axle-settings","title":"Axle Settings","text":"

This part of the settings is responsible for the proper connection of visual elements with the collider for each wheel - described earlier. The objects configured in this section are used to control the vehicle - its wheel speed and steering angle, which are calculated based on the input values. Correct configuration is very important from the point of view of the NPCVehicle movement on the road.

"},{"location":"Components/Traffic/NPCs/Vehicle/#lights-settings","title":"Lights Settings","text":"

This part of the settings is related to the configuration of materials emission - used when a specific lighting is activated. There are 3 types of lights: Brake, Left Turn Signal and Right Turn Signal. Each of the lights has its visual equivalent in the form of a Mesh. In the case of NPCVehicle all of the lights are included in the Body object Mesh, which has many materials - including those related to lights.

For each type of light, the appropriate Material Index (the equivalent of the element index in the mesh) and Lighting Color are assigned - yellow for the Turn Signals, red for Brake.

Lighting Intensity values are also configured - the greater the value, the more light will be emitted. This value is related to the Lighting Exposure Weight parameter, which is an exposure weight - the lower the value, the more light is emitted.

The brake light is switched on depending on the speed of the NPCVehicle, while RandomTrafficSimulator is responsible for switching the turn signals on and off.
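
A sketch of how such emission control can be written in code; the standard shader property name _EmissionColor and the exact intensity math are assumptions:

Material material = meshRenderer.materials[materialIndex];              // Material Index selects the light's material in the Body mesh\nmaterial.EnableKeyword(\"_EMISSION\");\nmaterial.SetColor(\"_EmissionColor\", lightingColor * lightingIntensity); // higher intensity - more emitted light\n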

"},{"location":"Components/Traffic/RandomTraffic/AddRandomTrafficEnvironment/","title":"Add Random Traffic Environment","text":""},{"location":"Components/Traffic/RandomTraffic/AddRandomTrafficEnvironment/#add-environment-for-random-traffic","title":"Add Environment for Random Traffic","text":"

This document describes the steps to properly configure RandomTrafficSimulator in your environment.

"},{"location":"Components/Traffic/RandomTraffic/AddRandomTrafficEnvironment/#map-preparation","title":"Map preparation","text":"

The 3D map model should be added to the scene. Please make sure that the Environment component with appropriate mgrsOffsetPosition is attached to the root GameObject.

"},{"location":"Components/Traffic/RandomTraffic/AddRandomTrafficEnvironment/#annotate-traffic-lights","title":"Annotate Traffic Lights","text":"

Please attach TrafficLight component to all traffic light GameObjects placed on scene.

"},{"location":"Components/Traffic/RandomTraffic/AddRandomTrafficEnvironment/#load-lanelet","title":"Load Lanelet","text":"

The lanelet load process can be performed by opening AWSIM -> Random Traffic -> Load Lanelet at the top toolbar of Unity Editor.

You should be prompted with a similar window to the one presented below. Please adjust the parameters for the loading process if needed.

Waypoint settings affect the density and accuracy of the generated waypoints. The parameters are described below:

To generate the Lanelet2 map representation in your simulation, please click the Load button. Environment components should be generated and placed as child objects of the Environment GameObject. You can check their visual representation by clicking consecutive elements in the scene hierarchy.

"},{"location":"Components/Traffic/RandomTraffic/AddRandomTrafficEnvironment/#annotate-traffic-intersections","title":"Annotate Traffic Intersections","text":"

To annotate intersections, please add an empty GameObject named TrafficIntersections at the same level as the TrafficLanes GameObject.

For each intersection repeat the following steps:

  1. Add a GameObject named TrafficIntersection as a child object of the TrafficIntersections object.
  2. Attach a TrafficIntersection component to it.
  3. Add a BoxCollider as a component of the GameObject. Its size and position should cover the whole intersection. This is used for detecting vehicles in the intersection.
  4. Set TrafficLightGroups. Each group is controlled to have different signals, so facing traffic lights should be added to the same group. These groupings are used in traffic signal control.
  5. Specify the signal control pattern.
"},{"location":"Components/Traffic/RandomTraffic/AddRandomTrafficEnvironment/#annotate-right-of-ways-on-uncontrolled-intersections","title":"Annotate right of ways on uncontrolled intersections","text":"

For the vehicles to operate properly, the right of way of TrafficLanes must be annotated manually at intersections without traffic lights.

To set the right of way, please:

"},{"location":"Components/Traffic/RandomTraffic/AddRandomTrafficEnvironment/#annotate-stop-lines","title":"Annotate stop lines","text":"

For each right turn lane that yields to the opposite straight or left turn lane, a stop line needs to be defined near the center of the intersection. If there is no visible stop line, a StopLine component should be added to the scene, near the center of the intersection and associated with TrafficLane.

"},{"location":"Components/Traffic/RandomTraffic/AddRandomTrafficEnvironment/#assign-intersection-trafficlanes","title":"Assign Intersection TrafficLanes","text":"

To make the yielding rules work properly, it is necessary to categorize the TrafficLanes. The ones that belong to an intersection have the IntersectionLane variable set to true.

To automate the assignment of the corresponding IntersectionLane to each TrafficLane, the script AssignIntersectionTrafficLanes can be used.

  1. At the time of assignment, add it as a component to some object in the scene (e.g. to the Environment object).
  2. Disable the component (uncheck the checkbox next to the script name).
  3. Assign the GameObject which contains all TrafficLanes objects to TrafficLanesObjectsParent.
  4. Check all 4 options.
  5. Enable the component (check the checkbox next to the script name).

Check the log to see if all operations were completed:

As a result, the names of TrafficLane objects should have prefixes with sequential numbers and TrafficLane at intersections should be marked. TrafficLanes with IntersectionLane set to True are displayed by Gizmos in green color, if IntersectionLane is False their color is white.

"},{"location":"Components/Traffic/RandomTraffic/AddRandomTrafficEnvironment/#check-final-configuration","title":"Check final configuration","text":"

Once all the components are ready, the simulation can be run. Check carefully if the vehicles are moving around the map correctly. For each intersection, review the settings of the relevant components if vehicles are unable to proceed.

"},{"location":"Components/Traffic/RandomTraffic/RandomTrafficSimulator/","title":"Random Traffic Simulator","text":""},{"location":"Components/Traffic/RandomTraffic/RandomTrafficSimulator/#random-traffic-simulator","title":"Random Traffic Simulator","text":"

The RandomTrafficSimulator simulates city traffic with respect to all traffic rules. The system allows for random selection of car models and the paths they follow. It also allows adding static vehicles in the simulation.

"},{"location":"Components/Traffic/RandomTraffic/RandomTrafficSimulator/#getting-started","title":"Getting Started","text":""},{"location":"Components/Traffic/RandomTraffic/RandomTrafficSimulator/#overview","title":"Overview","text":"

The random traffic system consists of the following components:

"},{"location":"Components/Traffic/RandomTraffic/RandomTrafficSimulator/#components-settings","title":"Components Settings","text":"

The following section describes Unity Editor components settings.

"},{"location":"Components/Traffic/RandomTraffic/RandomTrafficSimulator/#random-traffic-simulator_1","title":"Random Traffic Simulator","text":"Parameter Description General Settings Seed Seed value for random generator Ego Vehicle Transform of ego vehicle Vehicle Layer Mask LayerMask that masks only vehicle(NPC and ego) colliders Ground Layer Mask LayerMask that masks only ground colliders of the map NPC Vehicle Settings Max Vehicle Count Maximum number of NPC vehicles to be spawned in simulation NPC Prefabs Prefabs representing controlled vehicles. They must have NPCVehicle component attached. Spawnable Lanes TrafficLane components where NPC vehicles can be spawned during traffic simulation Vehicle Config Parameters for NPC vehicle controlSudden Deceleration is a deceleration related to emergency braking Debug Show Gizmos Enable the checkbox to show editor gizmos that visualize behaviours of NPCs"},{"location":"Components/Traffic/RandomTraffic/RandomTrafficSimulator/#gizmos","title":"Gizmos","text":"

Gizmos are useful for checking the current behavior of NPCs and its causes. Gizmos have a high computational load, so please disable them if the simulation is laggy.

"},{"location":"Components/Traffic/RandomTraffic/YieldingRules/","title":"Yielding Rules","text":""},{"location":"Components/Traffic/RandomTraffic/YieldingRules/#yielding-rules","title":"Yielding rules","text":"

The RandomTrafficSimulator assumes that there are 10 phases of yielding priority:

RandomTrafficYielding scene

If you would like to see how RandomTrafficSimulator with yielding rules works or run some tests, we encourage you to familiarize yourself with the RandomTrafficYielding scene described in this section.

  1. NONE - state in which it is only checked if a vehicle is approaching the intersection. If yes, a transition to state ENTERING_INTERSECTION is made.

  2. ENTERING_INTERSECTION - state in which it is checked whether any of the situations LANES_RULES_ENTERING_INTERSECTION, LEFT_HAND_RULE_ENTERING_INTERSECTION or INTERSECTION_BLOCKED occurs; if so, the state of the vehicle is changed to the one matching the situation - to determine if the vehicle must yield priority. If none of these situations occur, only the entry into the intersection will result in a transition to AT_INTERSECTION.

  3. AT_INTERSECTION - state in which it is checked whether any of the situations LANES_RULES_AT_INTERSECTION, LEFT_HAND_RULE_AT_INTERSECTION or FORCING_PRIORITY occurs; if so, the state of the vehicle is changed to the one matching the situation - to determine if the vehicle must yield priority. If none of these situations occur, only leaving the intersection will result in a transition to NONE.

  4. INTERSECTION_BLOCKED - when vehicle A is approaching the intersection, it yields priority to vehicle B, which should yield priority but is forcing it. This refers to a situation in which vehicle B has entered the intersection and has already passed its stop point; vehicle B isn't going to stop and has to leave the intersection. Until now, vehicle A has continued to pass through the intersection without taking vehicle B into account; now it checks if any vehicle is forcing priority (vehicle A has the INTERSECTION_BLOCKED state). (vehicle A is the red car with the blue sphere, B is the white car to which it points)

  5. LEFT_HAND_RULE_ENTERING_INTERSECTION - vehicle A, before entering an intersection where the traffic lights are off, yields priority to vehicles (e.g. B) that are approaching the intersection and are on the left side of vehicle A. Until now, situations in which the lights are off were not handled: if a vehicle didn't have a red light and was going straight, it just entered the intersection. Now vehicle A checks whether the vehicles on the left (e.g. B) have a red light; if not, it yields them priority. (vehicle A is the truck with the gray sphere, B is the white car to which it points)

  6. LEFT_HAND_RULE_AT_INTERSECTION - when vehicle A is already at the intersection, it yields priority to vehicles (e.g. B) that are also at the intersection and are on its left side - in cases where no other yielding rules are resolved between them (i.e. there are no RightOfWayLanes between them). (vehicle A is red, B is white)

  7. LANES_RULES_ENTERING_INTERSECTION - when vehicle B intends to turn left and is approaching the intersection where it needs to yield to vehicle A, which is going straight ahead, it goes to the state LANES_RULES_ENTERING_INTERSECTION. The introduced changes take into account that a vehicle approaching the intersection considers not only the vehicles at the intersection but also those approaching it (at a distance of less than minimumDistanceToIntersection from the intersection). (vehicle B is the truck with the yellow sphere, A is the white car to which it points)

  8. LANES_RULES_AT_INTERSECTION - when vehicle B intends to turn right and is already at the intersection where it needs to yield to vehicle A which is approaching the intersection, then it goes to state LANES_RULES_AT_INTERSECTION. The introduced changes take into account that a vehicle approaching the intersection considers not only the vehicles at the intersection but also those which are approaching it (at a distance of less than minimumDistanceToIntersection to the intersection). (vehicle B is car with red sphere, A the white car to which it points)

  9. FORCING_PRIORITY - state in which some vehicle B should yield priority to a vehicle A but doesn't - for some reason, most likely it could be some unusual situation in which all other rules have failed. Then vehicle A which is at intersection yields priority to a vehicle that is forcing priority. In such a situation, vehicle A transitions to state FORCING_PRIORITY. It is very rare to achieve this state, but it does happen.
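To make these phases concrete, below is a minimal C# sketch of the state machine they describe. The enum values mirror the phases above; the transition predicates, method and class names are hypothetical simplifications for illustration, not the actual AWSIM implementation.

```csharp
// Hedged sketch of the yielding-phase state machine described above.
// Predicates passed to Step() are hypothetical placeholders for the documented checks.
public enum YieldPhase
{
    NONE,
    ENTERING_INTERSECTION,
    AT_INTERSECTION,
    INTERSECTION_BLOCKED,
    LEFT_HAND_RULE_ENTERING_INTERSECTION,
    LEFT_HAND_RULE_AT_INTERSECTION,
    LANES_RULES_ENTERING_INTERSECTION,
    LANES_RULES_AT_INTERSECTION,
    FORCING_PRIORITY
}

public class YieldStateMachineSketch
{
    public YieldPhase Phase { get; private set; } = YieldPhase.NONE;

    public void Step(bool approaching, bool entered, bool leftIntersection,
                     bool blocked, bool leftHandRule, bool lanesRule, bool forcing)
    {
        switch (Phase)
        {
            case YieldPhase.NONE:
                if (approaching) Phase = YieldPhase.ENTERING_INTERSECTION;
                break;
            case YieldPhase.ENTERING_INTERSECTION:
                if (lanesRule) Phase = YieldPhase.LANES_RULES_ENTERING_INTERSECTION;
                else if (leftHandRule) Phase = YieldPhase.LEFT_HAND_RULE_ENTERING_INTERSECTION;
                else if (blocked) Phase = YieldPhase.INTERSECTION_BLOCKED;
                else if (entered) Phase = YieldPhase.AT_INTERSECTION;
                break;
            case YieldPhase.AT_INTERSECTION:
                if (lanesRule) Phase = YieldPhase.LANES_RULES_AT_INTERSECTION;
                else if (leftHandRule) Phase = YieldPhase.LEFT_HAND_RULE_AT_INTERSECTION;
                else if (forcing) Phase = YieldPhase.FORCING_PRIORITY;
                else if (leftIntersection) Phase = YieldPhase.NONE;
                break;
            default:
                // Simplification: any yielding phase returns to NONE once the
                // vehicle has left the intersection and the situation is resolved.
                if (leftIntersection) Phase = YieldPhase.NONE;
                break;
        }
    }
}
```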

"},{"location":"Components/Traffic/RandomTraffic/YieldingRules/#gizmos-markings","title":"Gizmos Markings","text":""},{"location":"Components/Traffic/TrafficComponents/","title":"Traffic Components","text":"

This section is still under development!

This section describes in detail all the components related to simulated traffic in the Environment prefab.

"},{"location":"Components/Traffic/TrafficComponents/#architecture","title":"Architecture","text":"

The random traffic system consists of the following components:

The process of spawning an NPCVehicle and the later control of its behavior is presented in the following sequence diagram.

Sequence Diagram Composition

Please note that the diagram composition has been simplified to the level of GameObjects and selected elements of the GameObjects in order to improve readability.

"},{"location":"Components/Traffic/TrafficComponents/#lanelet2","title":"Lanelet2","text":"

Lanelet2 is a library created for handling maps for automated driving. It also natively supports ROS and ROS2. In AWSIM, Lanelet2 is used for reading and handling the map of all roads. Specifically, the map contains all TrafficLanes and StopLines. You may also see us referring to the actual map data file (*.osm) as a Lanelet2.

Lanelet2 official page

If you want to learn more, we encourage you to visit the official project page.

"},{"location":"Components/Traffic/TrafficComponents/#randomtrafficsimulator","title":"RandomTrafficSimulator","text":"

Nomenclature

Please note that several different elements described on this page (the GameObject in the default scene as well as one of the TrafficSimulator variants) are named RandomTrafficSimulator. Keep this in mind when reading the following page - so you don't get confused.

RandomTrafficSimulator simulates traffic with respect to all traffic rules. The system allows for random selection of car models and the paths they follow. It also allows adding static vehicles to the simulation.

"},{"location":"Components/Traffic/TrafficComponents/#link-in-the-default-scene","title":"Link in the default Scene","text":"

The RandomTrafficSimulator consists of several GameObjects.

"},{"location":"Components/Traffic/TrafficComponents/#components","title":"Components","text":"

RandomTrafficSimulator only has one component: Traffic Manager (script) which is described below.

"},{"location":"Components/Traffic/TrafficComponents/#trafficmanager-script","title":"TrafficManager (script)","text":"

Traffic Manager (script) is responsible for all top-level management of the NPCVehicles. It manages the spawning of NPCVehicles on TrafficLanes.

TrafficManager uses the concept of TrafficSimulators. One TrafficSimulator is responsible for managing its set of NPCVehicles. Every TrafficSimulator spawns its own NPCVehicles independently. The vehicles spawned by one TrafficSimulator respect its configuration. TrafficSimulators can be interpreted as NPCVehicle spawners, each with its own configuration. Many different TrafficSimulators can be added to the TrafficManager.

If a random mode is selected (RandomTrafficSimulator), then NPCVehicles will spawn in random places (from the selected list) and drive in random directions. To be able to reproduce the behavior of the RandomTrafficSimulator, a Seed can be specified - it is used for pseudo-random number generation.

The TrafficManager script also configures all of the spawned NPCVehicles, so that they all share common parameters:

The Vehicle Layer Mask and Ground Layer Mask are used to make sure all vehicles can correctly interact with the ground to guarantee simulation accuracy.

Max Vehicle Count specifies how many NPCVehicles can be present on the scene at once. When the number of NPCVehicles on the scene is equal to this value the RandomTrafficSimulator stops spawning new vehicles until some existing vehicles drive away and disappear.
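As a hedged illustration of how the Seed and Max Vehicle Count parameters interact, consider the following sketch (all names are hypothetical; this is not the actual TrafficManager code). A fixed seed makes every pseudo-random spawn decision reproducible, while the cap simply blocks spawning until a vehicle despawns.

```csharp
using System;
using System.Collections.Generic;

// Hedged sketch: a seeded spawner that respects a vehicle cap.
public class SpawnerSketch
{
    readonly Random random;                 // seeded pseudo-random generator
    readonly int maxVehicleCount;           // corresponds to Max Vehicle Count
    readonly List<string> activeVehicles = new List<string>();

    public SpawnerSketch(int seed, int maxVehicleCount)
    {
        random = new Random(seed);          // same Seed => same spawn sequence
        this.maxVehicleCount = maxVehicleCount;
    }

    public void TrySpawn(IReadOnlyList<string> npcPrefabs, IReadOnlyList<string> spawnableLanes)
    {
        if (activeVehicles.Count >= maxVehicleCount) return;  // cap reached, wait
        string prefab = npcPrefabs[random.Next(npcPrefabs.Count)];
        string lane = spawnableLanes[random.Next(spawnableLanes.Count)];
        activeVehicles.Add($"{prefab}@{lane}");
    }

    // Called when a vehicle drives out of the map and is destroyed.
    public void Despawn(string vehicle) => activeVehicles.Remove(vehicle);
}
```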

The EgoVehicle field provides information about the Ego vehicle, which is used for correct behavior of NPCVehicles when interacting with the Ego.

Show Gizmos checkbox specifies whether the Gizmos visualization should be displayed when running the simulation.

Show Yielding Phase checkbox specifies whether yielding phases should be displayed by Gizmos - in the form of spheres above vehicles, details in the Markings section.

Show Obstacle Checking checkbox specifies whether obstacle checking should be displayed by Gizmos - in the form of boxes in front of vehicles.

Show Spawn Points checkbox specifies whether spawn points should be displayed by Gizmos - in the form of flat cuboids on roads.

Gizmos performance

Gizmos have a high computational load. Enabling them may cause the simulation to lag.

As mentioned earlier, the TrafficManager may contain multiple TrafficSimulators. The two available variants of TrafficSimulator are described below.

TrafficSimulators should be interpreted as spawning configurations for some group of NPCVehicles on the scene.

"},{"location":"Components/Traffic/TrafficComponents/#random-traffic","title":"Random Traffic","text":"

When using the RandomTrafficSimulator, the NPCVehicle prefabs (NPC Prefabs) can be chosen, as well as the Spawnable Lanes. The latter are the only TrafficLanes on which the NPCVehicles can spawn. Upon spawning, one of the Spawnable Lanes is chosen and - given the vehicle limits are not reached - one random NPCVehicle from the NPC Prefabs list is spawned on that lane. After spawning, the NPCVehicle takes a random route until it drives out of the map - then it is destroyed.

The Maximum Spawns field specifies how many Vehicles should be spawned before this TrafficSimulator stops working. Set to 0 to disable this restriction.

"},{"location":"Components/Traffic/TrafficComponents/#route-traffic","title":"Route Traffic","text":"

When using the RouteTrafficSimulator, the NPCVehicle prefabs (NPC Prefabs) as well as the Route can be chosen. The latter is an ordered list of TrafficLanes that all spawned vehicles will drive on. Given the vehicle limit is not reached, the RouteTrafficSimulator will spawn one of the NPC Prefabs, chosen randomly, on the first Route element (Element 0). After the first vehicle drives off, the next one will spawn according to the configuration. It is important for all Route elements to be connected and arranged in order of appearance on the map. The NPCVehicle disappears after completing the Route.

The Maximum Spawns field specifies how many Vehicles should be spawned before this TrafficSimulator stops working. Set to 0 to disable this restriction.

"},{"location":"Components/Traffic/TrafficComponents/#parameter-explanation","title":"Parameter explanation","text":"Parameter Description General Settings Seed Seed value for random generator Ego Vehicle Transform of ego vehicle Vehicle Layer Mask LayerMask that masks only vehicle(NPC and ego) colliders Ground Layer Mask LayerMask that masks only ground colliders of the map Culling Distance Distance at which NPCs are culled relative to EgoVehicle Culling Hz Culling operation cycle NPCVehicle Settings Max Vehicle Count Maximum number of NPC vehicles to be spawned in simulation NPC Prefabs Prefabs representing controlled vehicles. They must have NPCVehicle component attached. Spawnable Lanes TrafficLane components where NPC vehicles can be spawned during traffic simulation Vehicle Config Parameters for NPC vehicle controlSudden Deceleration is a deceleration related to emergency braking Debug Show Gizmos Enable the checkbox to show editor gizmos that visualize behaviours of NPCs"},{"location":"Components/Traffic/TrafficComponents/#traffic-light-script","title":"Traffic Light (script)","text":"

Traffic Light (script) is a component added to every TrafficLight on the scene. It is responsible for configuring the TrafficLight behavior - the bulbs and their colors.

The Renderer field points to the renderer that should be configured - in this case it is always a TrafficLight renderer.

Bulbs Emission Config is a list describing the available colors for this Traffic Light. Every element of this list configures the following:

The Bulb Material Config is a list of the available bulbs in a given Traffic Light. Every element describes a different bulb. Every bulb has the following aspects configured:
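The shape of this configuration can be pictured with the following hedged C# sketch; the type and field names are hypothetical, not the actual AWSIM types.

```csharp
using UnityEngine;

// Hedged sketch of the Traffic Light (script) configuration described above.
public enum BulbColorSketch { Red, Yellow, Green }

[System.Serializable]
public class BulbEmissionConfigSketch
{
    public BulbColorSketch color;   // color this entry makes available
    public Color emissionColor;     // emission color applied to the material
    public float intensity;         // emission intensity
}

[System.Serializable]
public class BulbMaterialConfigSketch
{
    public string bulbName;         // which bulb of the traffic light this is
    public int materialIndex;       // index into the renderer's materials
}

public class TrafficLightSketch : MonoBehaviour
{
    public Renderer targetRenderer;                         // the TrafficLight renderer
    public BulbEmissionConfigSketch[] bulbsEmissionConfig;  // available colors
    public BulbMaterialConfigSketch[] bulbMaterialConfig;   // available bulbs
}
```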

"},{"location":"Components/Traffic/TrafficComponents/#trafficintersections","title":"TrafficIntersections","text":"

TrafficIntersection is a representation of a road intersection. It consists of several components. TrafficIntersection is used in the Scene for managing TrafficLights. All Traffic Lights present on one Traffic Intersection must be synchronized - this is why the logic of TrafficLight operation is included in the TrafficIntersection.

"},{"location":"Components/Traffic/TrafficComponents/#link-in-the-default-scene_1","title":"Link in the default Scene","text":"

Every TrafficIntersection has its own GameObject and is added as a child of the aggregate TrafficIntersections Object. TrafficIntersections are elements of an Environment, so they should be placed as children of an appropriate Environment Object.

"},{"location":"Components/Traffic/TrafficComponents/#components_1","title":"Components","text":"

TrafficIntersection has the following components:

"},{"location":"Components/Traffic/TrafficComponents/#collider","title":"Collider","text":"

Every TrafficIntersection contains a Box Collider element. It needs to accurately cover the whole area of the TrafficIntersection. Box Collider - together with the Traffic Intersection (script) - is used for detecting vehicles entering the TrafficIntersection.

"},{"location":"Components/Traffic/TrafficComponents/#traffic-intersection-script","title":"Traffic Intersection (script)","text":"

Traffic Intersection (script) is used for controlling all TrafficLights on a given intersection. The Collider Mask field is a mask on which all Vehicle Colliders are present. It - together with Box Collider - is used for keeping track of how many Vehicles are currently present on the Traffic Intersection. The Traffic Light Groups and Lighting Sequences are described below.

"},{"location":"Components/Traffic/TrafficComponents/#traffic-light-groups","title":"Traffic Light Groups","text":"

Traffic Light Group is a collection of all Traffic Lights that are in the same state at all times. This includes all redundant Traffic Lights shining in one direction as well as the ones in the opposite direction. In other words - as long as two Traffic Lights indicate exactly the same thing they should be added to the same Traffic Light Group. This grouping simplifies the creation of Lighting Sequences.

"},{"location":"Components/Traffic/TrafficComponents/#lighting-sequences","title":"Lighting Sequences","text":"

Lighting Sequences is the field in which the Traffic Lights logic of the whole intersection is defined. It consists of many different Elements. Each Element is a collection of Orders that take effect for the period of time specified in the Interval Sec field. Lighting Sequences Elements are executed sequentially, in order of definition, and looped - after the last Element the sequence goes back to the first one.

The Group Lighting Orders field defines which Traffic Light Groups should change their state and how. For every Group Lighting Orders Element the Traffic Lights Group is specified with the exact description of the goal state for all Traffic Lights in that group - which bulb should light up and with what color.

One Lighting Sequences Element has many Group Lighting Orders, which means that for one period of time many different orders can be given. E.g. when Traffic Lights in one direction change color to green - Traffic Lights in the parallel direction change color to red.

Traffic Light state persistence

If in the given Lighting Sequences Element no order is given to some Traffic Light Group - this Group will keep its current state. When the next Lighting Sequences Element activates - the given Traffic Light Group will remain in an unchanged state.
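To make this structure concrete, here is a hedged C# sketch of how a lighting sequence could be stored and looped; all names are hypothetical and the real Traffic Intersection (script) may differ.

```csharp
using System.Collections;
using UnityEngine;

// Hedged sketch of the Lighting Sequences structure described above.
[System.Serializable]
public class GroupLightingOrderSketch
{
    public int trafficLightGroup;   // which Traffic Light Group to change
    public string bulb;             // which bulb should light up
    public Color color;             // target color of that bulb
}

[System.Serializable]
public class LightingSequenceElementSketch
{
    public float intervalSec;                   // how long this state lasts
    public GroupLightingOrderSketch[] orders;   // orders applied at the same time
}

public class LightingSequencesSketch : MonoBehaviour
{
    public LightingSequenceElementSketch[] sequence; // must contain at least one element

    IEnumerator Start()
    {
        // Elements run sequentially, in order of definition, and loop forever.
        for (int i = 0; ; i = (i + 1) % sequence.Length)
        {
            foreach (var order in sequence[i].orders)
                ApplyOrder(order); // groups without an order keep their state
            yield return new WaitForSeconds(sequence[i].intervalSec);
        }
    }

    void ApplyOrder(GroupLightingOrderSketch order)
    {
        // Placeholder: set the bulb state for every light in the group.
    }
}
```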

Lighting Sequence Sample - details

Description | Editor\n--- | ---\nTraffic Lights in Pedestrian Group 1 change color to flashing green. Other Groups keep their current state. This state lasts for 5 seconds. | \nTraffic Lights in Pedestrian Group 1 change color to solid red. Other Groups keep their current state. This state lasts for 1 second. | \nTraffic Lights in Vehicle Group 1 change color to solid yellow. Other Groups keep their current state. This state lasts for 5 seconds. | \nTraffic Lights in Vehicle Group 1 change color to solid red. Other Groups keep their current state. This state lasts for 3 seconds. | \nTraffic Lights in Vehicle Group 2 change color to solid green. Traffic Lights in Pedestrian Group 2 change color to solid green. Other Groups keep their current state. This state lasts for 15 seconds. | \nTraffic Lights in Pedestrian Group 2 change color to flashing green. Other Groups keep their current state. This state lasts for 5 seconds. | \nTraffic Lights in Pedestrian Group 2 change color to solid red. Other Groups keep their current state. This state lasts for 1 second. | \nTraffic Lights in Vehicle Group 2 change color to solid yellow. Other Groups keep their current state. This state lasts for 5 seconds. | \nTraffic Lights in Vehicle Group 2 change color to solid red. Other Groups keep their current state. This state lasts for 3 seconds. | \nSequence loops back to the first element of the list. | "},{"location":"Components/Traffic/TrafficComponents/#trafficlanes","title":"TrafficLanes","text":"

TrafficLane is a representation of a short road segment. It consists of several waypoints that are connected by straight lines. TrafficLanes are used as a base for a RandomTrafficSimulator. They allow NPCVehicles to drive on the specific lanes on the road and perform different maneuvers with respect to the traffic rules. TrafficLanes create a network of drivable roads when connected.

"},{"location":"Components/Traffic/TrafficComponents/#link-in-the-default-scene_2","title":"Link in the default Scene","text":"

Every TrafficLane has its own GameObject and is added as a child of the aggregate TrafficLanes Object. TrafficLanes are an element of an Environment, so they should be placed as children of an appropriate Environment Object.

TrafficLanes can be imported from the lanelet2 *.osm file.

"},{"location":"Components/Traffic/TrafficComponents/#components_2","title":"Components","text":"

TrafficLane consists of an Object containing Traffic Lane (script).

TrafficLane has a transformation property - like every Object in Unity - however it is not used in any way. All details are configured in the Traffic Lane (script); the information in the Object transformation is ignored.

"},{"location":"Components/Traffic/TrafficComponents/#traffic-lane-script","title":"Traffic Lane (script)","text":"

Traffic Lane (script) defines the TrafficLane structure. The Waypoints field is an ordered list of points that - when connected with straight lines - create a TrafficLane.

Traffic Lane (script) coordinate system

Waypoints are defined in the Environment coordinate system, the transformation of GameObject is ignored.

The Turn Direction field contains information about the direction of this TrafficLane - whether it is a right turn, a left turn or a straight road.

Traffic lanes are connected using Next Lanes and Prev Lanes fields. This way individual TrafficLanes can create a connected road network. One Traffic Lane can have many Next Lanes and Prev Lanes. This represents the situation of multiple lanes connecting to one or one lane splitting into many - e.g. the possibility to turn and to drive straight.

Right Of Way Lanes are described below.

Every TrafficLane has to have its Stop Line field configured when a Stop Line is present at the end of the TrafficLane. Additionally, the Speed Limit field contains the highest allowed speed on the given TrafficLane.
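Gathering the fields described above, a hedged sketch of the data held by a Traffic Lane (script) could look like this (hypothetical names; the actual AWSIM types may differ):

```csharp
using UnityEngine;

// Hedged sketch of the Traffic Lane (script) structure described above.
public enum TurnDirectionSketch { Straight, Left, Right }

public class TrafficLaneSketch : MonoBehaviour
{
    public Vector3[] waypoints;                 // Environment coordinates; straight segments between them
    public TurnDirectionSketch turnDirection;   // straight road, left or right turn
    public TrafficLaneSketch[] nextLanes;       // lanes this one connects into
    public TrafficLaneSketch[] prevLanes;       // lanes connecting into this one
    public TrafficLaneSketch[] rightOfWayLanes; // lanes whose vehicles must be given way
    public MonoBehaviour stopLine;              // set when a StopLine is present at the end
    public float speedLimit;                    // highest allowed speed on this lane
}
```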

"},{"location":"Components/Traffic/TrafficComponents/#right-of-way-lanes","title":"Right Of Way Lanes","text":"

Right Of Way Lanes is a collection of TrafficLanes. A vehicle moving on the given TrafficLane has to give way to all vehicles moving on every Right Of Way Lane. These lanes are determined based on basic traffic rules. Setting Right Of Way Lanes allows the RandomTrafficSimulator to manage all NPCVehicles so that they follow traffic rules and drive safely.

In the Unity editor - when a TrafficLane is selected - aside from the selected TrafficLane highlighted in blue, all Right Of Way Lanes are highlighted in yellow.

Right Of Way Lanes Sample - details

The selected TrafficLane (blue) is a right turn on an intersection. This means, that before turning right the vehicle must give way to all vehicles driving from ahead - the ones driving straight as well as the ones turning left. This can be observed as TrafficLanes highlighted in yellow.

"},{"location":"Components/Traffic/TrafficComponents/#stoplines","title":"StopLines","text":"

StopLine is a representation of a place on the road where vehicles giving way to other vehicles should stop and wait. StopLines allow the RandomTrafficSimulator to manage NPCVehicles in a safe and correct way - according to the traffic rules. All locations where a vehicle can stop in order to give way to other vehicles and that are enforced by infrastructure (this does not include regular lane changing) need to be marked with StopLines.

"},{"location":"Components/Traffic/TrafficComponents/#link-in-the-default-scene_3","title":"Link in the default Scene","text":"

Every StopLine has its own GameObject and is added as a child of the aggregate StopLines Object. Stop Lines are an element of an Environment, so they should be placed as children of an appropriate Environment Object.

StopLines can be imported from the lanelet2 *.osm file.

"},{"location":"Components/Traffic/TrafficComponents/#components_3","title":"Components","text":"

StopLine consists of an Object containing Stop Line (script).

Stop Line has a transformation property - like every Object in Unity - however it is not used in any way. All details are configured in the Stop Line (script); the information in the Object transformation is ignored.

"},{"location":"Components/Traffic/TrafficComponents/#stop-line-script","title":"Stop Line (script)","text":"

Stop Line (script) defines StopLine configuration. The Points field is an ordered list of points that - when connected - create a StopLine. The list of points should always have two elements that create a straight StopLine.

Stop Line (script) coordinate system

Points are defined in the Environment coordinate system, the transformation of GameObject is ignored.

The Has Stop Sign field contains information about whether the configured StopLine has a corresponding StopSign on the scene.

Every Stop Line needs to have a Traffic Light field configured with the corresponding Traffic Light. This information allows the RandomTrafficSimulator to manage the NPCVehicles in such a way that they respect the Traffic Lights and behave on the Traffic Intersections correctly.
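Analogously, a hedged sketch of the Stop Line (script) data described above (hypothetical names):

```csharp
using UnityEngine;

// Hedged sketch of the Stop Line (script) configuration described above.
public class StopLineSketch : MonoBehaviour
{
    public Vector3[] points = new Vector3[2];   // two points forming a straight line, Environment coordinates
    public bool hasStopSign;                    // whether a corresponding StopSign exists on the scene
    public MonoBehaviour trafficLight;          // the corresponding Traffic Light
}
```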

"},{"location":"Components/Traffic/TrafficComponents/#gizmos","title":"Gizmos","text":"

Gizmos are an in-simulation visualization showing the current and future moves of the NPCVehicles. They are useful for checking the current behavior of NPCs and its causes. On the Scene, they are visible as cuboid contours indicating which TrafficLanes will be taken by each vehicle in the near future.

Gizmos computing

Gizmos have a high computational load. Please disable them if the simulation is laggy.

"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/","title":"Add Vehicle","text":"

Ego Vehicle Component

In this tutorial we will create a new EgoVehicle. To learn more about what an EgoVehicle is in AWSIM please visit Ego Vehicle description page.

"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/#cerate-an-object","title":"Cerate an Object","text":"

Add a child Object to the Simulation called EgoVehicle.

"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/#add-a-rigidbody","title":"Add a Rigidbody","text":"
  1. While having a newly created EgoVehicle Object selected, in the Inspector view click on the 'Add Component' button, search for Rigidbody and select it.

  2. Configure Mass and Drag with the correct values for your Vehicle.

  3. Configure Interpolation and Collision Detection.
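For illustration, the equivalent Rigidbody setup can also be done from code; the following is a hedged sketch with placeholder values, not recommended settings.

```csharp
using UnityEngine;

// Hedged sketch: the same Rigidbody setup done from a script.
public class EgoRigidbodySetupSketch : MonoBehaviour
{
    void Awake()
    {
        var rb = gameObject.AddComponent<Rigidbody>();
        rb.mass = 1500f;    // vehicle mass in kg (placeholder value)
        rb.drag = 0.05f;    // placeholder drag value
        rb.interpolation = RigidbodyInterpolation.Interpolate;
        rb.collisionDetectionMode = CollisionDetectionMode.ContinuousDynamic;
    }
}
```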

"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/#add-visual-elements","title":"Add visual elements","text":"

For a detailed explanation of how to add visual elements to your Vehicle, check out this dedicated tutorial.

"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/#add-a-canter-of-mass","title":"Add a Canter of Mass","text":"

To add a center of mass to your vehicle you have to add a CoM child Object to the EgoVehicle Object (the same as in steps before).

Then just set the position of the CoM Object in the Inspector view to represent the real-world center of mass of the Vehicle.

How do I know what the Center of Mass of my Vehicle is

The best way is to obtain the Center of Mass information from your Vehicle documentation.

However, if this is not possible, you can try to estimate the Center of Mass of your vehicle. The best practice is to set the estimated Center of Mass as follows:

Note: This will vary very much depending on your Vehicle construction. For the best possible result please follow the Vehicle specifications.

"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/#add-a-reflection-probe","title":"Add a Reflection Probe","text":"
  1. Add a new Object called Reflection Probe as a child to the EgoVehicle Object.

  2. Click on the 'Add Component' button, in the windows that pops-up search for Reflection Probe and select it.

    !!!note Please note that together with the Reflection Probe, a `HD Additional Reflection Data` script should also be added automatically.

      ![reflection probe additional script](reflection_probe_additional_script.png)\n
  3. Configure the Reflection Probe as you wish.

    !!! example \"Example Configuration\" Below you can see an example configuration of the Reflection Probe.

      ![reflection probe configuration](reflection_probe_configuration.png)\n
"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/#add-colliders","title":"Add Colliders","text":"

For a detailed explanation how to add colliders to your Vehicle check out this dedicated tutorial.

"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/#add-a-base-for-sensors-urdf","title":"Add a base for sensors (URDF)","text":"

You will most certainly want to add some sensors to your EgoVehicle. First you need to create a parent Object for all those sensors called URDF. To do this we will add a child Object URDF to the EgoVehicle Object.

This Object will be used as a base for all sensors we will add later.

"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/#add-a-vehicle-script","title":"Add a Vehicle Script","text":"

To be able to control your EgoVehicle you need a Vehicle Script.

  1. Add the Vehicle Script to the EgoVehicle Object.

  2. Configure the Vehicle Script Axle Settings and Center Of Mass Transform.

Testing

It is not possible to test this Script alone, but you can test the following

If components listed above work correctly this means the Vehicle Script works correctly too.

"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/#add-a-vehicle-keyboard-input-script","title":"Add a Vehicle Keyboard Input Script","text":"

You can control your EgoVehicle in the simulation manually with just one Script called Vehicle Keyboard Input.

If you want to add it just click the 'Add Component' button on the EgoVehicle Object and search for Vehicle Keyboard Input Script and select it.

"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/#add-a-vehicle-visual-effect-script","title":"Add a Vehicle Visual Effect Script","text":"

For a visual indication of a Vehicle status you will need a Vehicle Visual Effect Script. To add and configure it follow the steps below.

  1. Add a Vehicle Visual Effect Script by clicking 'Add Component' button, searching for it and selecting it.

  2. Configure the lights.

    !!!note In this step we will configure only the Brake Lights, but you should repeat this for every Light. The process is almost the same for all Lights - just change the mesh renderer and lighting settings according to your preference.

"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/#how-to-test","title":"How to test","text":"

After configuring Vehicle Visual Effect Script it is advised to test whether everything works as expected.

  1. Make sure you have a Vehicle Keyboard Input Script added and that it is enabled.

  2. If your scene does not have any models yet, please turn gravity off in the Rigidbody configuration so that the Vehicle does not fall into infinity.

  3. Start the simulation.

  4. Test the Turn Signals.

    You can control the Turn Signals with the Vehicle Keyboard Input Script. Activate the Turn Signals with one of the following keys:

    - 1 - Left Turn Signal
    - 2 - Right Turn Signal
    - 3 - Hazard Lights
    - 4 - Turn Off all Signals

  5. Test the Lights.

    You can control the lights by \"driving\" the Vehicle using Vehicle Keyboard Input Script. Although if you have an empty Environment like in this tutorial the Vehicle won't actually drive.

    To test the Brake Lights, change the gear to Drive by pressing D on the keyboard and activate braking by holding the down arrow.

    To test the Reverse Light change the gear to Reverse by pressing R on the keyboard. The Reverse Light should turn on right away.

Camera tip

If you have not configured a camera or configured it in such a way that you can't see the Vehicle well you can still test most of the lights by changing views.

Please note that this method won't work for testing the Brake Lights, as for them to work you need to keep the down arrow pressed all the time.

"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/#add-a-vehicle-ros-input-script","title":"Add a Vehicle Ros Input Script","text":"

For controlling your Vehicle with autonomous driving software (e.g. Autoware) you need a Vehicle Ros Input Script.

Disable Vehicle Keyboard Input Script

If you have added a Vehicle Keyboard Input Script in your Vehicle please disable it when using the Vehicle Ros Input Script.

Not doing so will lead to the vehicle receiving two different inputs which will cause many problems.

Add it to the EgoVehicle Object by clicking on the 'Add Component' button, searching for it and selecting it.

The Script is configured to work with Autoware by default, but you can change the topics and Quality of Service settings as you wish.

Note

The Vehicle should be configured correctly, but if you have many Vehicles or something goes wrong, please select the right Vehicle in the Vehicle field by clicking on the small arrow icon and choosing the right item from the list.

"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/#how-to-test_1","title":"How to test","text":"

The best way to test the Vehicle Ros Input Script is to run Autoware.

  1. Run the Scene the same way as on this page.
  2. Launch only Autoware as shown on this page.
  3. Plan a path in Autoware as shown here; if the Vehicle moves correctly in AWSIM, then the Script is configured well.
"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/#add-sensors","title":"Add Sensors","text":"

For a detailed explanation how to add sensors to your Vehicle check out this dedicated tutorial.

"},{"location":"Components/Vehicle/AddNewVehicle/AddAVehicle/#add-a-vehicle-to-scene","title":"Add a Vehicle to Scene","text":"

First you will have to save the Vehicle you created as a prefab, to easily add it later to different Scenes.

  1. Open the Vehicles directory in the Project view (Assets/AWSIM/Prefabs/Vehicles)
  2. Drag the Vehicle Object from the Hierarchy view to the Vehicles directory

After that, you can add the Vehicle you created to different Scenes by dragging it from Vehicles directory to the Hierarchy of different Scenes.

"},{"location":"Components/Vehicle/AddNewVehicle/AddColliders/","title":"Add Colliders","text":"

Next you need to add Colliders to your Vehicle. To do this follow the steps below.

  1. Add a child Object called Colliders to the EgoVehicle Object.

  2. Shift the parent Colliders Object accordingly, as in the earlier steps where we shifted the Models.

"},{"location":"Components/Vehicle/AddNewVehicle/AddColliders/#add-a-vehicle-collider","title":"Add a Vehicle Collider","text":"
  1. Add a child Object Collider to the Colliders Object.

  2. Add a Mesh Collider component to the Collider Object by clicking on the 'Add Component' button in the Inspector view and searching for it.

  3. Click on the arrow in the mesh selection field and select your collider mesh from the pop-up window. Next, click on the check-box called Convex; by now your collider mesh should be visible in the editor.

"},{"location":"Components/Vehicle/AddNewVehicle/AddColliders/#add-wheel-colliders","title":"Add Wheel Colliders","text":"
  1. Add a child Object Wheels to the Colliders Object.

Note

In this tutorial we will add only one wheel collider, but you should repeat the step for all 4 wheels. That is, follow the instructions that follow this message for every wheel your Vehicle has.

  1. Add a child Object FrontLeftWheel to the Wheels Object.

  2. Add a Wheel Collider component to the FrontLeftWheel Object by clicking 'Add Component' and searching for it.

  3. Add a Wheel Script to the FrontLeftWheel Object by clicking 'Add Component' and searching for it.

  4. Drag FrontLeftWheel Object from the WheelVisuals to the Wheel Visual Transform field.

  5. Add a Wheel Collider Config Script to the FrontLeftWheel Object by clicking 'Add Component' and searching for it.

  6. Configure the Wheel Collider Config Script so that the Vehicle behaves as you wish.

  7. Set the Transform of FrontLeftWheel Object to match the visuals of your Vehicle.

Successful configuration

If you have done everything right your Colliders Object should look similar to the one following.

"},{"location":"Components/Vehicle/AddNewVehicle/AddSensors/","title":"Add Sensors","text":""},{"location":"Components/Vehicle/AddNewVehicle/AddSensors/#awsim-sensors","title":"AWSIM Sensors","text":"

There is a number of different sensors available in AWSIM. Below we present a list of sensors with links to their individual pages.

"},{"location":"Components/Vehicle/AddNewVehicle/AddSensors/#add-links-for-sensors","title":"Add links for sensors","text":"

The best practice is to replicate the ROS sensor transformation tree in Unity using Objects.

"},{"location":"Components/Vehicle/AddNewVehicle/AddSensors/#coordinate-system-conversion","title":"Coordinate system conversion","text":"

Please note that Unity uses the less common left-handed coordinate system. Keep this in mind while defining transformations. More details about right-handed and left-handed systems can be found here.

To simplify the conversion process, remember that any point (x, y, z) in the ROS coordinate system has the equivalent (-y, z, x) in the Unity coordinate system.

The same can be done with rotation: a ROS orientation described with roll, pitch and yaw (r, p, y) translates to the Unity rotation (p, -y, -r).

Unit conversion

Please remember to convert the rotation units. ROS uses radians and Unity uses degrees. The conversion from radians (rad) to degrees (deg) is as follows.

deg = rad * 180 / PI\n
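These conversion rules can be collected into a small helper. The following is a hedged sketch (a hypothetical utility, not part of AWSIM); the example values reproduce the sensor_kit_base_link conversion worked through below.

```csharp
using UnityEngine;

// Hedged sketch of the ROS -> Unity conversion rules described above.
public static class RosToUnitySketch
{
    // ROS position (x, y, z) -> Unity position (-y, z, x).
    public static Vector3 Position(float x, float y, float z)
        => new Vector3(-y, z, x);

    // ROS orientation (roll, pitch, yaw) in radians
    // -> Unity rotation (pitch, -yaw, -roll) in degrees.
    public static Vector3 RotationDegrees(float roll, float pitch, float yaw)
        => new Vector3(pitch, -yaw, -roll) * Mathf.Rad2Deg;
}

// Example (sensor_kit_base_link values from the tutorial below):
//   Position(0.9f, 0.0f, 2.0f)                 -> (0.0, 2.0, 0.9)
//   RotationDegrees(-0.001f, 0.015f, -0.0364f) -> (0.8594, 2.0856, 0.0573)
```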
"},{"location":"Components/Vehicle/AddNewVehicle/AddSensors/#add-transformations-tree","title":"Add transformations tree","text":"

URDF

Before following this tutorial, please make sure you have a URDF Object as shown in this section.

First we will have to add a base_link which is the root of all transformations.

Add a base_link Object as a child to the URDF Object.

base_link transformation

Please remember to set an appropriate transformation of the base_link Object so that it is identical to the base_link used in ROS with reference to the Vehicle.

This is very important, as a mistake here will result in all subsequent sensors being misplaced.

Inside the base_link we will represent all transformations contained in the ROS transformations tree.

You will have to check your Vehicle specific configuration. You can do this in many ways, for example:

"},{"location":"Components/Vehicle/AddNewVehicle/AddSensors/#add-one-sensor-link","title":"Add one sensor link","text":"

Note

In this step we will only add one sensor link. You will have to repeat this step for every sensor you want to add to your Vehicle.

Let's say we want to add a LiDAR that is facing right.

We have the following configuration files.

base_link:\n    sensor_kit_base_link:\n        x: 0.9\n        y: 0.0\n        z: 2.0\n        roll: -0.001\n        pitch: 0.015\n        yaw: -0.0364\n
sensor_kit_base_link:\n    velodyne_right_base_link:\n        x: 0.0\n        y: -0.56362\n        z: -0.30555\n        roll: -0.01\n        pitch: 0.71\n        yaw: -1.580\n

We can clearly see the structure of transformation tree. The transformations are as follows.

base_link -> sensor_kit_base_link -> velodyne_right_base_link\n

We need to start adding these transformations from the root of the tree. We will start with the sensor_kit_base_link, as the base_link already exists in our tree.

  1. The first step is to add an Object named the same as the transformation frame (sensor_kit_base_link).

  2. Next we have to convert the transformation from the ROS standard to the Unity standard. This is done with the formulas shown in this section.

    The result of conversion of the coordinate systems and units is shown below.

    Position:\n(0.9, 0.0, 2.0)             ->  (0.0, 2.0, 0.9)\nRotation:\n(-0.001, 0.015, -0.0364)    ->  (0.8594, 2.0856, 0.0573)\n

    The resulting sensor_kit_base_link Object transformation is shown below.

Now the same has to be done with the velodyne_right_base_link.

  1. Add transformation Object (velodyne_right_base_link).

    !!!info Remember to place the new Object under the correct parent; in this case velodyne_right_base_link is added as a child of sensor_kit_base_link, because this is what the .yaml file says.

  2. Convert the transformation into Unity coordinate system.

    The correct transformation is shown below.

    Position:\n(0, -0.56362, -0.30555)     ->  (0.56362, -0.30555, 0)\nRotation:\n(-0.01, 0.71, -1.580)       ->  (40.68, 90.5273, 0.573)\n

    The final velodyne_right_base_link Object transformation is shown below.

Success

If you have done everything right, after adding all of the sensor links your URDF Object tree should look something like the one following.

"},{"location":"Components/Vehicle/AddNewVehicle/AddSensors/#add-sensors","title":"Add sensors","text":"

After adding links for all sensors you need to add the actual sensors into your Vehicle.

Sensor position

Please keep in mind that we have created the sensor links in order to have accurate transformations for all of the sensors. This implies that the Sensor Object itself cannot have any transformation of its own.

If one of your Sensors, after adding it to the scene, is mispositioned, check whether the transformation is set to identity (position and rotation are zeros).

When adding sensors, note that almost all of them have some common fields.

"},{"location":"Components/Vehicle/AddNewVehicle/AddSensors/#add-a-vehicle-status-sensor","title":"Add a Vehicle Status Sensor","text":"

To add a Vehicle Status Sensor to your Vehicle simply locate the following directory in the Project view and drag a prefab of this Sensor into the URDF Object.

Assets/AWSIM/Prefabs/Sensors\n

Next in the Inspector View select your Vehicle.

ROS message example

In this example you can see what a valid message from the Vehicle Status Sensor can look like.

$ ros2 topic echo --once /vehicle/status/velocity_status\nheader:\n  stamp:\n    sec: 17\n    nanosec: 709999604\n  frame_id: base_link\nlongitudinal_velocity: 0.004912620410323143\nlateral_velocity: -0.005416259169578552\nheading_rate: 0.006338323466479778\n---\n
"},{"location":"Components/Vehicle/AddNewVehicle/AddSensors/#add-a-lidar","title":"Add a LiDAR","text":"

Scene Manager

Before continuing with this tutorial please check out a dedicated one focused on Scene Manager.

To add a LiDAR to your Vehicle you will have to drag a model of the LiDAR to the link tree you have created in the earlier step.

You can use the predefined RGL LiDAR models or any other LiDAR models. In this tutorial we will be using RGL VelodyneVLP16 LiDAR model.

Simply locate the following directory in the Project view and drag the prefab into the designated sensor link.

Assets/AWSIM/Prefabs/Sensors/RobotecGPULidars\n

LiDAR noise configuration

In simulation, the LiDAR Sensor returns perfect result data. This is not an accurate representation of the real world.

The LiDAR Sensor addresses this issue by applying simulated noise to the output data. You can configure the noise parameters in the Inspector View under the Configuration -> Noise Params fields.

You can optionally remove the noise simulation by unchecking Apply Distance/Angular Gaussian Noise.

You can also change the ranges of the LiDAR detection.

It is also possible to configure the visualization of the Point Cloud generated by the LiDAR, e.g. change the hit-point shape and size.

ROS message example

In this example you can see what a valid message from the LiDAR Sensor can look like.

$ ros2 topic echo --once /lidar/pointcloud\nheader:\n  stamp:\n    sec: 20\n    nanosec: 589999539\n  frame_id: world\nheight: 1\nwidth: 14603\nfields:\n- name: x\n  offset: 0\n  datatype: 7\n  count: 1\n- name: y\n  offset: 4\n  datatype: 7\n  count: 1\n- name: z\n  offset: 8\n  datatype: 7\n  count: 1\n- name: intensity\n  offset: 16\n  datatype: 7\n  count: 1\n- name: ring\n  offset: 20\n  datatype: 4\n  count: 1\nis_bigendian: false\npoint_step: 24\nrow_step: 350472\ndata:\n- 156\n- 218\n- 183\n- 62\n- 0\n- 189\n- 167\n- 187\n- 32\n- 58\n- 173\n- 189\n- 0\n- 0\n- 0\n- 0\n- 0\n- 0\n- 200\n- 66\n- 1\n- 0\n- 0\n- 0\n- 198\n- 129\n- 28\n- 63\n- 0\n- 6\n- 230\n- 58\n- 128\n- 184\n- 93\n- 61\n- 0\n- 0\n- 0\n- 0\n- 0\n- 0\n- 200\n- 66\n- 9\n- 0\n- 0\n- 0\n- 92\n- 2\n- 194\n- 62\n- 0\n- 141\n- 42\n- 187\n- 128\n- 89\n- 139\n- 189\n- 0\n- 0\n- 0\n- 0\n- 0\n- 0\n- 200\n- 66\n- 2\n- 0\n- 0\n- 0\n- 187\n- 168\n- 42\n- 63\n- 0\n- 159\n- 175\n- 59\n- 160\n- 243\n- 185\n- 61\n- 0\n- 0\n- 0\n- 0\n- 0\n- 0\n- 200\n- 66\n- 10\n- 0\n- 0\n- 0\n- 119\n- 186\n- 204\n- 62\n- 0\n- 254\n- 23\n- 59\n- 128\n- 143\n- 41\n- 189\n- 0\n- 0\n- 0\n- 0\n- 0\n- 0\n- 200\n- 66\n- 3\n- 0\n- 0\n- 0\n- 65\n- 241\n- 59\n- 63\n- 128\n- 0\n- 252\n- 187\n- '...'\nis_dense: true\n---\n
"},{"location":"Components/Vehicle/AddNewVehicle/AddSensors/#add-an-imu","title":"Add an IMU","text":"

To add an IMU to your Vehicle you will have to drag a model of the IMU to the link tree you have created in the earlier step.

You can use the provided or your own IMU Sensor. In this tutorial we will be using IMU Sensor provided with AWSIM.

Simply locate the following directory in the Project view and drag the prefab into the designated sensor link.

Assets/AWSIM/Prefabs/Sensors\n

ROS message example

In this example you can see what a valid message from the IMU Sensor can look like.

$ ros2 topic echo --once /sensing/imu/tamagawa/imu_raw\nheader:\n  stamp:\n    sec: 20\n    nanosec: 589999539\n  frame_id: tamagawa/imu_link\norientation:\n  x: 0.0\n  y: 0.0\n  z: 0.0\n  w: 1.0\norientation_covariance:\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n- 0.0\nangular_velocity:\n  x: 0.014335081912577152\n  y: 0.008947336114943027\n  z: -0.008393825963139534\nangular_velocity_covariance:\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n- 0.0\nlinear_acceleration:\n  x: 0.006333829835057259\n  y: -0.005533283110707998\n  z: -0.0018753920448943973\nlinear_acceleration_covariance:\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n- 0.0\n---\n
"},{"location":"Components/Vehicle/AddNewVehicle/AddSensors/#add-a-gnss","title":"Add a GNSS","text":"

To add a GNSS Sensor to your Vehicle you will have to drag a model of the GNSS to the link tree you have created in the earlier step.

You can use the provided or your own GNSS Sensor. In this tutorial we will be using GNSS Sensor provided with AWSIM.

Simply locate the following directory in the Project view and drag the prefab into the designated sensor link.

Assets/AWSIM/Prefabs/Sensors\n

ROS message example

In this example you can see what a valid message from the GNSS Sensor can look like.

$ ros2 topic echo --once /sensing/gnss/pose\nheader:\n  stamp:\n    sec: 8\n    nanosec: 989999799\n  frame_id: gnss_link\npose:\n  position:\n    x: 81656.765625\n    y: 50137.5859375\n    z: 44.60169219970703\n  orientation:\n    x: 0.0\n    y: 0.0\n    z: 0.0\n    w: 0.0\n---\n
"},{"location":"Components/Vehicle/AddNewVehicle/AddSensors/#add-a-camera","title":"Add a Camera","text":"

To add a Camera Sensor to your Vehicle you will have to drag a model of the Camera to the link tree you have created in the earlier step.

Simply locate the following directory in the Project view and drag the prefab into the designated sensor link.

Assets/AWSIM/Prefabs/Sensors\n

You can configure some aspects of the Camera to your liking.

E.g. you can set the field of view (fov) of the camera by changing the Field of View field or manipulating the physical camera parameters like Focal Length.

The important thing is to configure the Camera Sensor Script correctly.

Always check whether the correct Camera Object is selected and make sure that Distortion Shader and Ros Image Shader are selected.

Example Camera Sensor Script configuration

You can add a live Camera preview onto the Scene. To do this, select the Show checkbox. Additionally, you can change how the preview is displayed: change the Scale value to control the size of the preview (how many times smaller the preview will be compared to the actual screen size).

Move the preview around the screen by changing the X Axis and Y Axis values in the Image On Gui section.

Camera preview example

Testing camera with traffic light recognition

You can test the Camera Sensor traffic light recognition by positioning the vehicle on the Unity Scene in such a way that you can see the traffic lights in the Camera preview.

Remember to lock the Inspector view on the Camera Object before dragging the whole Vehicle - this way you can see the preview while moving the vehicle.

Run the Scene the same way as on this page.

Launch only Autoware as shown on this page.

By default you should see the preview of traffic light recognition visualization in the bottom left corner of Autoware.

Traffic lights recognition example in Autoware

ROS message example

In this example you can see what a valid message from the Camera Sensor can look like.

$ ros2 topic echo --once /sensing/camera/traffic_light/image_raw\nheader:\n  stamp:\n    sec: 14\n    nanosec: 619999673\n  frame_id: traffic_light_left_camera/camera_optical_link\nheight: 1080\nwidth: 1920\nencoding: bgr8\nis_bigendian: 0\nstep: 5760\ndata:\n- 145\n- 126\n- 106\n- 145\n- 126\n- 106\n- 145\n- 126\n- 106\n- 145\n- 126\n- 105\n- 145\n- 126\n- 105\n- 145\n- 126\n- 105\n- 145\n- 126\n- 105\n- 145\n- 126\n- 105\n- 145\n- 126\n- 105\n- 145\n- 126\n- 105\n- 145\n- 126\n- 104\n- 145\n- 126\n- 104\n- 145\n- 126\n- 104\n- 145\n- 126\n- 104\n- 145\n- 126\n- 104\n- 145\n- 126\n- 104\n- 145\n- 126\n- 104\n- 145\n- 126\n- 104\n- 145\n- 126\n- 103\n- 145\n- 126\n- 103\n- 145\n- 126\n- 103\n- 145\n- 126\n- 103\n- 145\n- 126\n- 103\n- 145\n- 126\n- 103\n- 145\n- 126\n- 103\n- 145\n- 126\n- 103\n- 145\n- 126\n- 103\n- 145\n- 126\n- 103\n- 145\n- 124\n- 103\n- 145\n- 124\n- 103\n- 145\n- 124\n- 103\n- 145\n- 124\n- 103\n- 145\n- 124\n- 103\n- 145\n- 124\n- 103\n- 145\n- 124\n- 103\n- 145\n- 124\n- 101\n- 145\n- 124\n- 101\n- 145\n- 124\n- 101\n- 145\n- 124\n- 101\n- 145\n- 123\n- 101\n- 145\n- 123\n- 101\n- 145\n- 123\n- 101\n- 145\n- 123\n- '...'\n---\n
"},{"location":"Components/Vehicle/AddNewVehicle/AddSensors/#add-a-pose-sensor","title":"Add a Pose Sensor","text":"

To add a Pose Sensor to your Vehicle simply locate the following directory in the Project view and drag a prefab of this Sensor into the base_link Object.

Assets/AWSIM/Prefabs/Sensors\n

ROS message example

In this example you can see what a valid message from the Pose Sensor can look like.

$ ros2 topic echo --once /awsim/ground_truth/vehicle/pose\nheader:\n  stamp:\n    sec: 5\n    nanosec: 389999879\n  frame_id: base_link\npose:\n  position:\n    x: 81655.7578125\n    y: 50137.3515625\n    z: 42.8094367980957\n  orientation:\n    x: -0.03631274029612541\n    y: 0.0392342209815979\n    z: 0.02319677732884884\n    w: 0.9983005523681641\n---\n
"},{"location":"Components/Vehicle/AddNewVehicle/AddSensors/#test-a-sensor","title":"Test a Sensor","text":"

You can test whether the Sensor works correctly in several ways.

  1. Check whether the configuration is correct.

    In a terminal, source ROS with the following line (only if you haven't done so already).

    source /opt/ros/humble/setup.bash\n

    Check the details about the topic that your Sensor is broadcasting to with the following command.

    ros2 topic info -v <topic_name>\n

    !!!example In this example we can see that the message is broadcasted by AWSIM and nobody is listening. We can also examine the Quality of Service settings.

      ```log\n  $ ros2 topic info -v /awsim/ground_truth/vehicle/pose\n  Type: geometry_msgs/msg/PoseStamped\n\n  Publisher count: 1\n\n  Node name: AWSIM\n  Node namespace: /\n  Topic type: geometry_msgs/msg/PoseStamped\n  Endpoint type: PUBLISHER\n  GID: 01.10.13.11.98.7a.b1.2a.ee.a3.5a.11.00.00.07.03.00.00.00.00.00.00.00.00\n  QoS profile:\n    Reliability: RELIABLE\n    History (Depth): KEEP_LAST (1)\n    Durability: VOLATILE\n    Lifespan: Infinite\n    Deadline: Infinite\n    Liveliness: AUTOMATIC\n    Liveliness lease duration: Infinite\n\n  Subscription count: 0\n\n  ```\n
  2. Check whether correct information is broadcasted.

    In a terminal, source ROS with the following line (only if you haven't done so already).

    source /opt/ros/humble/setup.bash\n

    View one transmitted message.

    ros2 topic echo --once <topic_name>\n

    !!!example In this example we can see the Vehicle's location at the moment of executing the command.

      **NOTE:** The position and orientation are relative to the frame in the `header/frame_id` field (`base_link` in this example).\n\n  ```log\n  $ ros2 topic echo --once /awsim/ground_truth/vehicle/pose\n  header:\n    stamp:\n      sec: 46\n      nanosec: 959998950\n    frame_id: base_link\n  pose:\n    position:\n      x: 81655.7265625\n      y: 50137.4296875\n      z: 42.53997802734375\n    orientation:\n      x: 0.0\n      y: -9.313260163068549e-10\n      z: -6.36646204504876e-12\n      w: 1.0\n  ---\n  ```\n
"},{"location":"Components/Vehicle/AddNewVehicle/AddVisualElements/","title":"Add Visual Elements","text":"

Your EgoVehicle needs many individual visual parts. Below we will add all needed visual elements.

First, in the EgoVehicle Object, add a child Object called Models.

Inside the Models Object we will add all visual models of our EgoVehicle.

"},{"location":"Components/Vehicle/AddNewVehicle/AddVisualElements/#add-a-body","title":"Add a Body","text":"

First you will need to add a Body for your Vehicle. It will contain many parts, so let's first create a Body parent Object.

Next we will need to add the Car Body.

  1. Add a child Object BodyCar to the Body Object.

  2. To the BodyCar Object add a Mesh Filter.

    Click on the 'Add Component' button, search for Mesh Filter and select it. Next, search for the mesh of your vehicle and select it in the Mesh field.

  3. To the BodyCar Object add a Mesh Renderer.

    Click on the 'Add Component' button, search for Mesh Renderer and select it.

  4. Specify Materials.

    You need to specify what materials will be used for rendering your EgoVehicle model. Do this by adding elements to the Materials list and selecting the materials you wish to use as shown below.

    Add as many materials as your model has sub-meshes.

    !!!tip When you add too many materials, meaning there will be no sub-meshes to apply these materials to, you will see this warning. In such a case please remove materials until this warning disappears.

      ![mesh renderer too many materials](mesh_renderer_too_many_materials.png)\n
"},{"location":"Components/Vehicle/AddNewVehicle/AddVisualElements/#add-interactive-body-parts","title":"Add interactive Body parts","text":"

In this step we will add the following parts

Info

It may seem like all of the elements above can be parts of the Body mesh, but it is important for these parts to be separate, because we need to be able to make them interactive (e.g. flashing turn signals).

Another good reason for having separate meshes for Vehicle parts is that you may have a Vehicle model but need to add something extra for the simulation, e.g. a roof rack with sensors - which can be achieved by adding more meshes.

Note

We will illustrate this step only for the Brake Light, but you should repeat this step of the tutorial for each element of the list above.

  1. Add a child Object to the Body Object.

  2. Add a Mesh Filter and select the mesh (the same as in section before).

  3. Add a Mesh Renderer and select the materials (the same as in section before).

"},{"location":"Components/Vehicle/AddNewVehicle/AddVisualElements/#add-wheels","title":"Add Wheels","text":"

In this step we will add individual visuals for every wheel. This process is very similar to the one before.

  1. Add a child Object to the Models Object called WheelVisuals.

Note

In this tutorial we will add only one wheel, but you should repeat the step for all 4 wheels. That is, follow the instructions that follow this message for every wheel your Vehicle has.

  1. Add a child Object to the WheelVisuals Object called FrontLeftWheel.

  2. Add a child Object to the FrontLeftWheel Object called WheelFrontL. This Object will contain the actual wheel part.

  3. Add a Mesh Filter and select the wheel mesh.

  4. Add a Mesh Renderer and select the wheel materials.

  5. Repeat the previous steps to add the Brakes.

    The same way you added the WheelFrontL Object, now add the WheelFrontLBreaks Object. Naturally, you will have to adjust the mesh and materials used, as they will be different for the brakes than for the wheel.

    Your final brake configuration should look similar to the one following.

  6. Set the FrontLeftWheel parent Object transformation to position the wheel in the correct place.

Successful configuration

If you have done everything right your WheelVisuals Object should look similar to the one following.

"},{"location":"Components/Vehicle/AddNewVehicle/AddVisualElements/#move-the-models","title":"Move the models","text":"

The last step to correctly configure the Vehicle models is to shift them so that the EgoVehicle origin is in the center of the fixed axis.

This means you need to shift the whole Models Object accordingly (change the position fields in transformation).

Tip

Add a dummy Object as a child to the EgoVehicle Object (the same as in steps before) so it is located in the origin of the EgoVehicle.

Now move Models around relative to the dummy - change the position in the Inspector view. The dummy will help you see when the fixed axis (in the case of the Lexus from the example it is the rear axis) is aligned with the origin of the EgoVehicle.

In the end delete the dummy Object as it is no longer needed.

"},{"location":"Components/Vehicle/CustomizeSlip/","title":"Customize Slip","text":""},{"location":"Components/Vehicle/CustomizeSlip/#customize-slip","title":"Customize slip","text":"

By attaching a GroundSlipMutiplier.cs script to a collider (trigger), you can change the slip of the vehicle within the range of that collider.

"},{"location":"Components/Vehicle/CustomizeSlip/#sample-scene","title":"Sample scene","text":"

Assets\\AWSIM\\Scenes\\Samples\\VehicleSlipSample.unity

"},{"location":"Components/Vehicle/CustomizeSlip/#how-to-setup","title":"How to setup","text":"
  1. Create a collider.
  2. Check IsTrigger.
  3. Change the properties of FowardSlip and SidewaySlip.
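For orientation, here is a hedged sketch of what such a trigger script could look like; the real GroundSlipMutiplier.cs may differ, and the multiplier properties shown are hypothetical.

```csharp
using UnityEngine;

// Hedged sketch of a trigger volume that changes vehicle slip while inside it.
[RequireComponent(typeof(Collider))]
public class GroundSlipZoneSketch : MonoBehaviour
{
    [Range(0f, 1f)] public float forwardSlipMultiplier = 0.5f;   // hypothetical
    [Range(0f, 1f)] public float sidewaySlipMultiplier = 0.5f;   // hypothetical

    void Reset() => GetComponent<Collider>().isTrigger = true;   // step 2: IsTrigger

    void OnTriggerEnter(Collider other)
    {
        // Hypothetical hook: reduce grip for a vehicle entering the zone.
        Debug.Log($"{other.name} entered slip zone (fwd {forwardSlipMultiplier}, side {sidewaySlipMultiplier})");
    }

    void OnTriggerExit(Collider other)
    {
        // Hypothetical hook: restore the default slip on exit.
        Debug.Log($"{other.name} left slip zone");
    }
}
```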
"},{"location":"Components/Vehicle/EgoVehicle/","title":"Ego Vehicle","text":""},{"location":"Components/Vehicle/EgoVehicle/#introduction","title":"Introduction","text":"

EgoVehicle is a playable object that simulates a vehicle that can autonomously move around the scene. It has components (scripts) that make it possible to control it by keyboard or by Autoware (using ROS2 communication). Moreover, it provides sensory data needed for self-localization in space and detection of objects in the surrounding environment.

The default prefab EgoVehicle was developed using a Lexus RX450h 2015 vehicle model with a configured sample sensor kit.

Own EgoVehicle prefab

If you would like to develop your own EgoVehicle prefab, we encourage you to read this tutorial.

"},{"location":"Components/Vehicle/EgoVehicle/#supported-features","title":"Supported features","text":"

This vehicle model was created for Autoware simulation, and assuming that Autoware has already created a gas pedal map, this vehicle model uses acceleration as an input value. It has the following features:

AutowareSimulation

If you would like to see how EgoVehicle works or run some tests, we encourage you to familiarize yourself with the AutowareSimulation scene described in this section.

"},{"location":"Components/Vehicle/EgoVehicle/#lexus-rx450h-2015-parameters","title":"Lexus RX450h 2015 parameters","text":"Parameter Value Unit Mass \\(1500\\) \\(kg\\) Wheel base \\(2.5\\) \\(m\\) Tread width \\(Ft = 1.8; Rr = 1.8\\) \\(m\\) Center of Mass position \\(x = 0; y = 0.5; z = 0\\) \\(m\\) Moment of inertia \\(\\mathrm{yaw} = 2000; \\mathrm{roll} = 2000; \\mathrm{pitch} = 700\\) \\(kg \\cdot m^2\\) Spring rate \\(Ft = 55000; Rr = 48000\\) \\(N\\) Damper rate \\(Ft = 3000; Rr = 2500\\) \\(\\frac{N}{s}\\) Suspension stroke \\(Ft = 0.2; Rr = 0.2\\) \\(m\\) Wheel radius \\(0.365\\) \\(m\\)

Vehicle inertia

In general, measuring the moment of inertia is not easy, and past papers published by NHTSA are helpful. Measured Vehicle Inertial Parameters - NHTSA 1998

"},{"location":"Components/Vehicle/EgoVehicle/#prefab-and-fbx","title":"Prefab and Fbx","text":"

Prefab can be found under the following path:

Assets/AWSIM/Prefabs/NPCs/Vehicles/Lexus RX450h 2015 Sample Sensor.prefab\n

EgoVehicle name

In order to standardize the documentation, the name EgoVehicle will be used in this section as the equivalent of the prefab named Lexus RX450h 2015 Sample Sensor.

EgoVehicle prefab has the following content:

As you can see, it consists of 3 parent GameObjects:

All objects are described in the sections below.

"},{"location":"Components/Vehicle/EgoVehicle/#visual-elements","title":"Visual elements","text":"

The prefab is developed using models available in the form of *.fbx files. The visual elements have been loaded from the appropriate *.fbx file, aggregated, and added to the Models object.

The *.fbx file for the Lexus RX450h 2015 is located under the following path:

Assets/AWSIM/Models/Vehicles/Lexus RX450h 2015.fbx\n

Models object has the following content:

As you can see, the additional visual element is XX1 Sensor Kit.

It was also loaded from the *.fbx file which can be found under the following path:

Assets/AWSIM/Models/Sensors/XX1 Sensor Kit.fbx\n

Lexus RX450h 2015.fbx

The content of a sample *.fbx file is presented below; all elements except Collider have been added to the prefab as visual elements of the vehicle. Collider is used as the Mesh source for the Mesh Collider in the BodyCollider object.

"},{"location":"Components/Vehicle/EgoVehicle/#link-in-the-default-scene","title":"Link in the default Scene","text":"

The default scene contains a single Lexus RX450h 2015 Sample Sensor prefab that is added as a child of the EgoVehicle GameObject.

In the EgoVehicle prefab, the local coordinate system of the vehicle (the main prefab link) should be defined on the axis of the rear wheels projected onto the ground - midway between them. This matters when characterizing the dynamics of the object, as it makes describing its motion and control more convenient.

"},{"location":"Components/Vehicle/EgoVehicle/#components","title":"Components","text":"

There are several components responsible for the full functionality of Vehicle:

Scripts can be found under the following path:

Assets/AWSIM/Scripts/Vehicles/*\n
"},{"location":"Components/Vehicle/EgoVehicle/#architecture","title":"Architecture","text":"

The EgoVehicle architecture - with dependencies - is presented in the following diagram.

The communication between EgoVehicle components is presented in two different diagrams - a flow diagram and a sequence diagram.

The flow diagram presents the flow of information between the EgoVehicle components.

The sequence diagram provides a deeper insight into how the communication is structured and what steps each component takes. Some tasks performed by the elements are shown for clarification.

Sequence diagram

Please keep in mind that the Autoware message callbacks and the update loop shown in the sequence diagram are executed independently and concurrently. The one thing they have in common is a shared resource - the Vehicle (script).

"},{"location":"Components/Vehicle/EgoVehicle/#com","title":"CoM","text":"

CoM (Center of Mass) is an additional link that is defined to set the center of mass in the Rigidbody. The Vehicle (script) is responsible for its assignment. This value should reflect reality: most often, the center of mass of the vehicle is located at its center, at the height of its wheel axles - as shown below.

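As a minimal sketch of how such an assignment could look - assuming a hypothetical setup script and a CoM child transform, not the actual AWSIM implementation - consider:

using UnityEngine;\n\n// Hypothetical sketch: assigns the Rigidbody's center of mass from a CoM child link.\npublic class CenterOfMassSetter : MonoBehaviour\n{\n    [SerializeField] Transform com;          // child link marking the center of mass\n    [SerializeField] Rigidbody vehicleBody;  // the vehicle Rigidbody\n\n    void Awake()\n    {\n        // Rigidbody.centerOfMass is expressed in the body's local space.\n        vehicleBody.centerOfMass = vehicleBody.transform.InverseTransformPoint(com.position);\n    }\n}\n
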
"},{"location":"Components/Vehicle/EgoVehicle/#colliders","title":"Colliders","text":"

Colliders are used to ensure collisions between objects. In EgoVehicle, the main BodyCollider and colliders for each wheel in the Wheels GameObject were added.

Colliders object has the following content:

"},{"location":"Components/Vehicle/EgoVehicle/#bodycollider","title":"BodyCollider","text":"

BodyCollider is the vehicle object responsible for ensuring collisions with other objects. Additionally, it can be used to detect these collisions. The MeshCollider takes the Mesh of an object and builds its collider based on it. The Mesh for the collider was also loaded from the *.fbx file, similarly to the visual elements.

"},{"location":"Components/Vehicle/EgoVehicle/#wheels-colliders","title":"Wheels Colliders","text":"

Wheel colliders are essential elements from the point of view of driving vehicles on the road. They are the only parts that have contact with the road, so it is important that they are properly configured. Each vehicle, apart from the visual elements related to the wheels, should also have 4 colliders - one for each wheel.

Wheel (script) provides a reference to the collider and visual object for the particular wheel. Thanks to this, the Vehicle (script) has the ability to perform certain actions on each of the wheels, such as:

Wheel Collider Config (script) has been developed to block Inspector edits of the WheelCollider; it ensures that friction is set to 0 and that only wheel suspension and collisions are enabled - a sketch of this idea is shown after the note below.

Wheel Collider Config

For a better understanding of the meaning of WheelCollider we encourage you to read this manual.

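The general pattern for zeroing out WheelCollider friction looks roughly as follows - a sketch of the idea only, not the actual Wheel Collider Config implementation:

using UnityEngine;\n\npublic static class WheelColliderFrictionUtil\n{\n    // Zero the tire friction so the WheelCollider only contributes suspension and collision;\n    // traction forces can then be applied by the vehicle scripts instead.\n    public static void DisableFriction(WheelCollider wheel)\n    {\n        var forward = wheel.forwardFriction;   // WheelFrictionCurve is a struct,\n        forward.stiffness = 0f;                // so modify a copy and assign it back\n        wheel.forwardFriction = forward;\n\n        var sideways = wheel.sidewaysFriction;\n        sideways.stiffness = 0f;\n        wheel.sidewaysFriction = sideways;\n    }\n}\n
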
"},{"location":"Components/Vehicle/EgoVehicle/#rigidbody","title":"Rigidbody","text":"

Rigidbody ensures that the object is controlled by the physics engine. The Mass of the vehicle should approximate its actual weight. For the vehicle to physically interact with other objects - that is, react to collisions - Is Kinematic must be turned off. Use Gravity should be turned on to ensure the correct behavior of the body during movement. In addition, Interpolate should be turned on so that the physics engine's effects are smoothed out.

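Configured from code instead of the Inspector, those settings would look roughly like this (a minimal sketch with an assumed mass value, not the actual prefab configuration):

using UnityEngine;\n\npublic class VehicleRigidbodySetup : MonoBehaviour\n{\n    void Awake()\n    {\n        var rb = GetComponent<Rigidbody>();\n        rb.mass = 1500f;                                        // approximate curb weight in kg\n        rb.isKinematic = false;                                 // let the physics engine move the body\n        rb.useGravity = true;                                   // needed for correct behavior during movement\n        rb.interpolation = RigidbodyInterpolation.Interpolate;  // smooth motion between physics steps\n    }\n}\n
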
"},{"location":"Components/Vehicle/EgoVehicle/#reflection-probe","title":"Reflection Probe","text":"

Reflection Probe is added to EgoVehicle prefab to simulate realistic reflections in a scene. It is a component that captures and stores information about the surrounding environment and uses that information to generate accurate reflections on objects in real-time. The values in the component are set as default.

HD Additional Reflection Data (script) is additional component used to store settings for HDRP's reflection probes and is added automatically.

"},{"location":"Components/Vehicle/EgoVehicle/#urdf-and-sensors","title":"URDF and Sensors","text":"

URDF (Unified Robot Description Format) is equivalent to the simplified URDF format used in ROS2. This format makes it possible to define the positions of all sensors of the vehicle in relation to its local coordinate system. URDF is built from multiple GameObjects as children, appropriately transformed in relation to their parents.

A detailed description of the URDF structure and sensors added to prefab Lexus RX450h 2015 is available in this section.

"},{"location":"Components/Vehicle/EgoVehicle/#vehicle-script","title":"Vehicle (script)","text":"

Vehicle (script) provides the inputs that allow the EgoVehicle to move. The script inputs make it possible to set the acceleration of the vehicle and the steering angle of its wheels, taking into account the effects of suspension and gravity. They also make it possible to set the gear in the gearbox and to control the turn signals. The script inputs can be set by one of the following scripts: Vehicle Ros Input (script) or Vehicle Keyboard Input (script).

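Conceptually, a controlling script simply writes to these inputs every frame. A minimal sketch, with property names taken from the Input Data table below (illustrative, not the verified AWSIM API):

// Hypothetical sketch -- names follow the Input Data table, not a verified API.\nvehicle.AutomaticShiftInput = Shift.DRIVE;  // select the gear\nvehicle.AccelerationInput = 1.5f;           // m/s^2, clamped by the Input Settings limits\nvehicle.SteerAngleInput = -10f;             // degrees, negative steers left\nvehicle.SignalInput = Signal.LEFT;          // turn signal\n
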
The script performs several steps periodically:

"},{"location":"Components/Vehicle/EgoVehicle/#elements-configurable-from-the-editor-level","title":"Elements configurable from the editor level","text":"

The script uses the CoM link reference to assign the center of mass of the vehicle to the Rigidbody. In addition, Use inertia makes it possible to define the inertia tensor for the Rigidbody component - by default it is disabled.

Physics Settings - allow setting the values used to control the vehicle physics:

Axles Settings contain references to the Wheel (script) of each wheel. Thanks to them, the Vehicle (script) is able to set their steering angles and accelerations.

Input Settings - allow setting limits for the values on the script input:

Inputs - only serve as a preview of the values currently set on the script input:

"},{"location":"Components/Vehicle/EgoVehicle/#input-data","title":"Input Data","text":"Category Type Description AccelerationInput float Acceleration input (m/s^2). On the plane, output the force that will result in this acceleration. On a slope, it is affected by the slope resistance, so it does not match the input. SteerAngleInput float Vehicle steering input (degree). Negative steers left, positive right AutomaticShiftInput enumeration Vehicle gear shift input (AT).Values: PARKING, REVERSE, NEUTRAL, DRIVE. SignalInput enumeration Vehicle turn signal input.Values: NONE, LEFT, RIGHT, HAZARD."},{"location":"Components/Vehicle/EgoVehicle/#output-data","title":"Output data","text":"Category Type Description LocalAcceleration Vector3 Acceleration(m/s^2) in the local coordinate system of the vehicle Speed float Vehicle speed (m/s). SteerAngle float Vehicle steering angle (degree). Signal enumeration Vehicle turn signal. Velocity Vector3 Vehicle velocity (m/s) LocalVelocity Vector3 Vehicle local velocity (m/s) AngularVelocity Vector3 Vehicle angular velocity (rad/s)

The acceleration or deceleration of the vehicle is determined by AutomaticShiftInput and AccelerationInput. The vehicle will not move in the direction opposite to the selected gear (DRIVE or REVERSE), regardless of the sign of the acceleration input.

Example

Sample vehicle behaves:

"},{"location":"Components/Vehicle/EgoVehicle/#vehicle-ros-script","title":"Vehicle Ros (script)","text":"

Vehicle Ros (script) is responsible for subscribing to the messages that carry vehicle control commands. The values read from the messages are set on the inputs of the Vehicle (script).

The vehicle dynamics concept is designed around Autoware's autoware_auto_control_msgs/AckermannControlCommand and autoware_auto_vehicle_msgs/GearCommand message interfaces. The script sets the gear, the steering angle of the wheels and the acceleration of the vehicle (read from the aforementioned messages) on the Vehicle (script) input. In the case of a VehicleEmergencyStamped message, it sets the absolute acceleration to 0. In addition, also through Vehicle (script), the appropriate lights are turned on and off depending on the TurnIndicatorsCommand and HazardLightsCommand messages.

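The subscription pattern could look roughly like this - a hedged sketch assuming a helper such as SimulatorROS2Node.CreateSubscription and ros2cs-style generated message properties; the actual callback and field names may differ:

// Hedged sketch -- helper and property names are assumptions, not verified AWSIM API.\nvar subscription = SimulatorROS2Node.CreateSubscription<autoware_auto_control_msgs.msg.AckermannControlCommand>(\n    \"/control/command/control_cmd\", msg =>\n    {\n        // Map the ROS2 command onto the Vehicle (script) inputs.\n        vehicle.AccelerationInput = msg.Longitudinal.Acceleration;\n        vehicle.SteerAngleInput = -(float)msg.Lateral.Steering_tire_angle * Mathf.Rad2Deg;  // rad -> deg, sign flipped for Unity\n    });\n
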
"},{"location":"Components/Vehicle/EgoVehicle/#elements-configurable-from-the-editor-level_1","title":"Elements configurable from the editor level","text":""},{"location":"Components/Vehicle/EgoVehicle/#subscribed-topics","title":"Subscribed Topics","text":" Category Topic Message type Frequency (Autoware dependent) TurnIndicatorsCommand /control/command/turn_indicators_cmd autoware_auto_vehicle_msgs/TurnIndicatorsCommand 10 HazardLightsCommand /control/command/hazard_lights_cmd autoware_auto_vehicle_msgs/HazardLightsCommand 10 AckermannControlCommand /control/command/control_cmd autoware_auto_control_msgs/AckermannControlCommand 60 GearCommand /control/command/gear_cmd autoware_auto_vehicle_msgs/GearCommand 10 VehicleEmergencyStamped /control/command/emergency_cmd tier4_vehicle_msgs/msg/VehicleEmergencyStamped 60

ROS2 Topics

If you would like to know all the topics used in communication Autoware with AWSIM, we encourage you to familiarize yourself with this section

"},{"location":"Components/Vehicle/EgoVehicle/#vehicle-keyboard-script","title":"Vehicle Keyboard (script)","text":"

Vehicle Keyboard (script) allows EgoVehicle to be controlled by the keyboard. Thanks to this, it is possible to engage the appropriate gear of the gearbox, turn the lights on/off, and set the acceleration and steering of the wheels. All of this is set in the Vehicle (script) of the object assigned in the Vehicle field. The table below shows the available control options.

Button Option d Switch to move forward (drive gear) r Switch to move backwards (reverse gear) n Switch to neutral p Switch to parking gear UP ARROW Forward acceleration DOWN ARROW Reverse acceleration (decelerate) LEFT/RIGHT ARROW Turning 1 Turn left blinker on (right off) 2 Turn right blinker on (left off) 3 Turn on hazard lights 4 Turn off blinker or hazard lights

WASD

Controlling the movement of the vehicle with WASD as the equivalent of arrow keys is acceptable, but remember that the d button engages the drive gear.

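Internally, this kind of mapping can be implemented with Unity's legacy Input API. A minimal, illustrative sketch (not the actual Vehicle Keyboard script):

using UnityEngine;\n\npublic class KeyboardControlSketch : MonoBehaviour\n{\n    void Update()\n    {\n        if (Input.GetKeyDown(KeyCode.D)) { /* shift to DRIVE */ }\n        if (Input.GetKeyDown(KeyCode.R)) { /* shift to REVERSE */ }\n\n        // -1..1 axes from the arrow keys; these would then be scaled by\n        // Max Acceleration / Max Steer Angle and written to the Vehicle (script) inputs.\n        float accel = (Input.GetKey(KeyCode.UpArrow) ? 1f : 0f) - (Input.GetKey(KeyCode.DownArrow) ? 1f : 0f);\n        float steer = (Input.GetKey(KeyCode.RightArrow) ? 1f : 0f) - (Input.GetKey(KeyCode.LeftArrow) ? 1f : 0f);\n    }\n}\n
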
"},{"location":"Components/Vehicle/EgoVehicle/#elements-configurable-from-the-editor-level_2","title":"Elements configurable from the editor level","text":"

Value limits

Max Acceleration and Max Steer Angle values greater than those set in the Vehicle (script) are limited by the script itself - they will not be exceeded.

"},{"location":"Components/Vehicle/EgoVehicle/#vehicle-visual-effect-script","title":"Vehicle Visual Effect (script)","text":"

This part of the settings is related to the configuration of material emission when a specific light is activated. There are 4 types of lights: Brake, Left Turn Signal, Right Turn Signal and Reverse. Each of the lights has its visual equivalent in the form of a Mesh. In the case of EgoVehicle, each light type has its own GameObject which contains the assigned Mesh.

For each type of light, the appropriate Material Index (the equivalent of the element index in the mesh) and Lighting Color are assigned - yellow for Turn Signals, red for Brake and white for Reverse.

Lighting Intensity values are also configured - the greater the value, the more light is emitted. This value works together with the Lighting Exposure Weight parameter - the lower that value, the more light is emitted.

All types of lighting are switched on and off depending on the values obtained from the Vehicle (script) of the vehicle, which is assigned in the Vehicle field.

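Toggling such a lamp typically comes down to writing an emission color on the right material. A hedged sketch - the exact shader property name depends on the render pipeline, so the _EmissionColor property used here is an assumption:

using UnityEngine;\n\n// Illustrative sketch -- the emission property name varies per render pipeline.\npublic static class LampSketch\n{\n    public static void SetLamp(Renderer renderer, int materialIndex, Color color, float intensity, bool on)\n    {\n        var material = renderer.materials[materialIndex];\n        material.SetColor(\"_EmissionColor\", on ? color * intensity : Color.black);\n    }\n}\n
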
"},{"location":"Components/Vehicle/EgoVehicle/#elements-configurable-from-the-editor-level_3","title":"Elements configurable from the editor level","text":""},{"location":"Components/Vehicle/FollowCamera/","title":"FollowCamera","text":""},{"location":"Components/Vehicle/FollowCamera/#introduction","title":"Introduction","text":"

The FollowCamera component is designed to track a specified target object within the scene. It is attached to the main camera and maintains a defined distance and height from the target. Additionally, it offers the flexibility of custom rotation around the target as an optional feature.

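The core of such a component is usually a LateUpdate that re-positions the camera relative to the target each frame. A minimal sketch with illustrative parameter names:

using UnityEngine;\n\n// Minimal follow-camera sketch (parameter names are illustrative).\npublic class FollowCameraSketch : MonoBehaviour\n{\n    [SerializeField] Transform target;   // object to track\n    [SerializeField] float distance = 6f;\n    [SerializeField] float height = 2f;\n\n    void LateUpdate()\n    {\n        // Stay a fixed distance behind and above the target, looking at it.\n        transform.position = target.position - target.forward * distance + Vector3.up * height;\n        transform.LookAt(target);\n    }\n}\n
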
"},{"location":"Components/Vehicle/FollowCamera/#elements-configurable-from-the-editor-level","title":"Elements configurable from the editor level","text":""},{"location":"Components/Vehicle/FollowCamera/#required-member","title":"Required member","text":""},{"location":"Components/Vehicle/FollowCamera/#base-setttings","title":"Base Setttings","text":""},{"location":"Components/Vehicle/FollowCamera/#optional-movement-setttings","title":"Optional Movement Setttings","text":""},{"location":"Components/Vehicle/FollowCamera/#rotate-around-mode","title":"Rotate Around Mode","text":"

Camera rotation around the target can be activated by pressing the RotateAroundModeToggle key (default 'C' key). In this mode, the user can manually adjust the camera view at run-time using the mouse. To deactivate the Rotate Around mode, press the RotateAroundModeToggle key once more.

In the Rotate Around Mode, the camera view can be controlled as follows:

"},{"location":"Components/Vehicle/FollowCamera/#optional","title":"Optional","text":"

An optional prefab featuring a UI panel, located at Assets/Prefabs/UI/MainCameraView.prefab, can be used to showcase a user guide. To integrate this prefab into the scene, drag and drop it beneath the Canvas object. This prefab displays instructions on how to adjust the camera view whenever the Rotate Around Mode is activated.

"},{"location":"Components/Vehicle/URDFAndSensors/","title":"URDF And Sensors","text":""},{"location":"Components/Vehicle/URDFAndSensors/#urdf-and-sensors","title":"URDF and Sensors","text":"

This section describes the placement of sensors in EgoVehicle on the example of a Lexus RX450h 2015 Sample Sensor prefab.

URDF (Unified Robot Description Format) is equivalent to the simplified URDF format used in ROS2. This format makes it possible to define the positions of all sensors of the vehicle in relation to its main parent prefab coordinate system.

URDF is added directly to the main parent of the prefab and there are no transforms between these objects. It is built from multiple GameObjects as children, appropriately transformed in relation to their parents.

The transforms in the URDF object are defined using the data from the sensor kit documentation used in the vehicle. Such data can be obtained from sensor kit packages for Autoware, for example: awsim_sensor_kit_launch - it is used in the AWSIM compatible version of Autoware. This package contains a description of transforms between coordinate systems (frames) in the form of *.yaml files: sensors_calibration and sensor_kit_calibration.

The first file defines the transform of the sensor kit frame (sensor_kit_base_link) relative to the local vehicle frame (base_link). In Unity, this transform is defined in the Sensor Kit object. The second file contains the definitions of the transformations of all sensors with respect to the sensor kit - they are described in the Sensor Kit subsections.

Transformations

Please note that the transformation Objects are intended to be a direct reflection of the frames existing in ROS2. All frame Objects are defined as children of base_link and consist of nothing but a transformation - analogous to the one present in ROS2 (keep in mind the coordinate system conversion). The sensor Objects are added to the transformation Object with no transformation of their own.

Coordinate system conventions

Unity uses a left-handed convention for its coordinate system, while ROS2 uses a right-handed convention. For this reason, you should remember to perform conversions to get the correct transforms.

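The axis swap commonly used for this conversion (ROS2: x forward, y left, z up; Unity: z forward, x right, y up) can be sketched as follows - a generic helper, not the project's own utility:

using UnityEngine;\n\n// Generic ROS2 (right-handed) to Unity (left-handed) conversion sketch.\npublic static class Ros2UnitySketch\n{\n    public static Vector3 Position(double x, double y, double z)\n        => new Vector3(-(float)y, (float)z, (float)x);\n\n    // The handedness flip negates the rotation angle while the axis follows the position mapping.\n    public static Quaternion Rotation(double x, double y, double z, double w)\n        => new Quaternion((float)y, -(float)z, -(float)x, (float)w);\n}\n
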
"},{"location":"Components/Vehicle/URDFAndSensors/#base-link","title":"Base Link","text":"

Base Link (frame named base_link) is the formalized local coordinate system in URDF. All sensors that publish data specified in some frame present in Autoware are defined in relation to base_link. It is standard practice in ROS that base_link is the parent transformation of the whole robot and all robot parts are defined in some relation to base_link.

If any device publishes data in the base_link frame - it is added as a direct child, with no additional transformation intermediate Object (PoseSensor is an example). However, if this device has its own frame, it is added as a child to its frame Object - which provides an additional transformation. The final transformation can consist of many intermediate transformation Objects. The frame Objects are added to the base_link (GnssSensor and its parent gnss_link are an example).

"},{"location":"Components/Vehicle/URDFAndSensors/#sensor-kit","title":"Sensor Kit","text":"

Sensor Kit (frame named sensor_kit_base_link) is a set of objects that consists of all simulated sensors that are physically present in an autonomous vehicle and have their own coordinate system (frame). This set of sensors has its own frame sensor_kit_base_link that is relative to the base_link.

In the Lexus RX450h 2015 Sample Sensor prefab, it is added to the base_link GameObject with an appropriately defined transformation. It acts as an intermediate frame GameObject. The Sensor Kit is located on the top of the vehicle, so it is significantly shifted along the Oy and Oz axes. Sensors can be defined directly in this Object (VelodyneVLP16 is an example), or have their own transformation Object added on top of the sensor_kit_base_link (like GnssSensor mentioned in the base_link section).

The sensors described in this subsection are defined in relation to the sensor_kit_base_link frame.

"},{"location":"Components/Vehicle/URDFAndSensors/#lidars","title":"LiDARs","text":"

LidarSensor is the component that simulates the LiDAR (Light Detection and Ranging) sensor. The LiDARs mounted on the top of autonomous vehicles are primarily used to scan the environment for localization in space and for detection and identification of obstacles. LiDARs placed on the left and right sides of the vehicle are mainly used to monitor the traffic lane and detect vehicles moving in adjacent lanes. A detailed description of this sensor is available in this section.

Lexus RX450h 2015 Sample Sensor prefab has one VelodyneVLP16 prefab sensor configured on the top of the vehicle, used mainly for localization in space, but also for object recognition. Since the top LiDAR publishes data directly in the sensor_kit_base_link frame, the prefab is added directly to it - there is no transform. The two remaining LiDARs are defined but disabled - they do not provide data (but you can enable them!).

"},{"location":"Components/Vehicle/URDFAndSensors/#top","title":"Top","text":""},{"location":"Components/Vehicle/URDFAndSensors/#left-disabled","title":"Left - disabled","text":""},{"location":"Components/Vehicle/URDFAndSensors/#right-disabled","title":"Right - disabled","text":""},{"location":"Components/Vehicle/URDFAndSensors/#imu","title":"IMU","text":"

IMUSensor is a component that simulates an IMU (Inertial Measurement Unit) sensor. It measures acceleration and angular velocity of the EgoVehicle. A detailed description of this sensor is available in this section.

Lexus RX450h 2015 Sample Sensor has one such sensor located on the top of the vehicle. It is added to an Object tamagawa/imu_link that matches its frame_id and contains its transform with respect to sensor_kit_base_link. This transformation has no translation, only a rotation around the Oy and Oz axes. The transform is defined in such a way that its Oy axis points downwards - in accordance with the gravity vector.

"},{"location":"Components/Vehicle/URDFAndSensors/#gnss","title":"GNSS","text":"

GnssSensor is a component which simulates the position of the vehicle computed by a Global Navigation Satellite System (GNSS). A detailed description of this sensor is available in this section.

Lexus RX450h 2015 Sample Sensor prefab has one such sensor located on top of the vehicle. It is added to an Object gnss_link that matches its frame_id and contains its transform with respect to sensor_kit_base_link. The frame is slightly moved back along the Oy and Oz axes.

"},{"location":"Components/Vehicle/URDFAndSensors/#camera","title":"Camera","text":"

CameraSensor is a component that simulates an RGB camera. Autonomous vehicles can be equipped with many cameras used for various purposes. A detailed description of this sensor is available in this section.

Lexus RX450h 2015 Sample Sensor prefab has one camera, positioned on top of the vehicle in such a way that the camera's field of view provides an image including traffic lights - the status of which must be recognized by Autoware. It is added to an Object traffic_light_left_camera/camera_link that matches its frame_id and contains its transform with respect to sensor_kit_base_link.

"},{"location":"Components/Vehicle/URDFAndSensors/#pose","title":"Pose","text":"

PoseSensor is a component which provides access to the current position and rotation of the EgoVehicle - it is added as a ground truth.

The position and orientation of EgoVehicle is defined as the position of the frame base_link in the global frame, so this Object is added directly as its child without a transform.

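Since base_link is the vehicle's own transform, a ground-truth readout is essentially trivial - an illustrative sketch:

using UnityEngine;\n\n// Illustrative ground-truth readout: the pose of base_link is simply the Object's transform.\npublic class PoseReadoutSketch : MonoBehaviour\n{\n    public Vector3 Position => transform.position;      // world-space position of base_link\n    public Quaternion Rotation => transform.rotation;   // world-space orientation of base_link\n}\n
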
"},{"location":"Components/Vehicle/URDFAndSensors/#vehiclesensor","title":"VehicleSensor","text":"

VehicleStatusSensor is a component that is designed to aggregate information about the current state of the EgoVehicle, such as the active control mode, vehicle speed, steering of its wheels, or turn signal status. A detailed description of this sensor is available in this section.

This Object is not strictly related to any frame; however, it is treated as a sensor, and is therefore added to the URDF.

"},{"location":"DeveloperGuide/Contact/","title":"Contact","text":""},{"location":"DeveloperGuide/Contact/#contact","title":"Contact","text":"

English/Japanese OK

e-mail : takatoki.makino@tier4.jp

twitter : https://twitter.com/mackierx111

"},{"location":"DeveloperGuide/Documentation/","title":"Documentation","text":""},{"location":"DeveloperGuide/Documentation/#documentation","title":"Documentation","text":"

This document uses Material for MkDocs.

"},{"location":"DeveloperGuide/Documentation/#local-hosting","title":"Local hosting","text":"

1 Install Material for MkDocs.

$ pip install mkdocs-material\n
2 Host the documentation on localhost.
$ cd AWSIM\n$ mkdocs serve\nINFO     -  Building documentation...\nINFO     -  Cleaning site directory\nINFO     -  Documentation built in 0.16 seconds\nINFO     -  [03:13:22] Watching paths for changes: 'docs', 'mkdocs.yml'\nINFO     -  [03:13:22] Serving on http://127.0.0.1:8000/\n

3 Access http://127.0.0.1:8000/ with a web browser.

For further reference see Material for MkDocs - Getting started.

"},{"location":"DeveloperGuide/Documentation/#mkdocs-files","title":"MkDocs files","text":"

Use the following /docs directory and mkdocs.yml for new documentation files.

AWSIM\n\u251c\u2500 docs/                // markdown and image file for each document.\n\u2514\u2500 mkdocs.yml           // mkdocs config.\n
Create one directory per document. For example, the directory structure of this \"Documentation\" page might look like this.
AWSIM\n\u2514\u2500 docs/                            // Root of all documents\n    \u2514\u2500 DeveloperGuide               // Category\n        \u2514\u2500 Documentation            // Root of each document\n            \u251c\u2500 index.md             // Markdown file\n            \u2514\u2500 image_0.png          // Images used in markdown file\n
"},{"location":"DeveloperGuide/Documentation/#deploy-hosting","title":"Deploy & Hosting","text":"

When docs are pushed to the main branch, they are deployed to GitHub Pages using GitHub Actions. See also Material for MkDocs - Publishing your site

"},{"location":"DeveloperGuide/EditorSetup/Graphy/","title":"Graphy Asset Setup","text":""},{"location":"DeveloperGuide/EditorSetup/Graphy/#graphy-asset-setup","title":"Graphy Asset Setup","text":""},{"location":"DeveloperGuide/EditorSetup/Graphy/#add-graphy-from-asset-store","title":"Add Graphy from Asset Store","text":"

1) Go to Unity Asset Store and add Graphy to personal assets.

Graphy Asset Store link:

"},{"location":"DeveloperGuide/EditorSetup/Graphy/#add-graphy-to-the-unity-editor","title":"Add Graphy to the Unity Editor","text":"

1) Open up the Unity Editor. - Open up a temporary new scene by File -> New Scene -> Empty(Built-in) -> Create - This is due to a bug with Unity crashing on certain Linux configurations. - Once the package is imported, you can open up the desired scene. 2) Go to the Window menu and select Package Manager. 3) Make sure the My Assets tab is selected from the top left of the Package Manager window. 4) Find & select the Graphy from the list and click Download or Import from the bottom left of the Package Manager window. 5) There will be a popup window showing contents of the package. Click Import to add Graphy to the project.

After the import is complete, you should be able to see the Graphy prefab in the Hierarchy window of the AutowareSimulation scene. If it's missing, you can add it to the scene by following the steps below.

"},{"location":"DeveloperGuide/EditorSetup/Graphy/#integrating-graphy-into-custom-scenes","title":"Integrating Graphy into custom scenes","text":"

Graphy is pre-integrated within the AutowareSimulation scene. To incorporate Graphy into your own custom scenes, please adhere to the following steps:

1) Go to the Assets folder in the Project window. 2) Open Graphy > Prefab folder. 3) Drag the Graphy prefab into the scene. 4) You can customize your Graphy by selecting the Graphy prefab in the scene and changing the settings in the inspector window.

"},{"location":"DeveloperGuide/EditorSetup/Graphy/#useful-links","title":"Useful links:","text":"

Unity Package manager:

Graphy Github page:

Graphy Documentation:

"},{"location":"DeveloperGuide/EditorSetup/JetBrainsRider/","title":"Rider Configuration","text":""},{"location":"DeveloperGuide/EditorSetup/JetBrainsRider/#jetbrains-rider-setup-with-unity","title":"JetBrains Rider setup with Unity","text":""},{"location":"DeveloperGuide/EditorSetup/JetBrainsRider/#install-jetbrains-rider","title":"Install JetBrains Rider:","text":"

Follow the steps in:

sudo snap install rider --classic\n
"},{"location":"DeveloperGuide/EditorSetup/JetBrainsRider/#install-net-sdk","title":"Install .NET SDK:","text":"

Follow the steps in:

# Get Ubuntu version\ndeclare repo_version=$(if command -v lsb_release &> /dev/null; then lsb_release -r -s; else grep -oP '(?<=^VERSION_ID=).+' /etc/os-release | tr -d '\"'; fi)\n\n# Download Microsoft signing key and repository\nwget https://packages.microsoft.com/config/ubuntu/$repo_version/packages-microsoft-prod.deb -O packages-microsoft-prod.deb\n\n# Install Microsoft signing key and repository\nsudo dpkg -i packages-microsoft-prod.deb\n\n# Clean up\nrm packages-microsoft-prod.deb\n\n# Update packages\nsudo apt update\nsudo apt install dotnet-sdk-8.0\n
"},{"location":"DeveloperGuide/EditorSetup/JetBrainsRider/#connect-rider-to-unity-editor","title":"Connect Rider to Unity Editor:","text":"

Follow the steps in:

1) Open an existing Unity project in the Unity Editor.\n\n2) Select Edit > Preferences (Unity > Settings on macOS) and open the External Tools page.\n\n3) In the External Script Editor, select a \"Rider\" installation.\n\n4) In the Preferences window, click \"Regenerate project files\" under the External Tools section.\n\n5) While still in the Unity Editor, right-click anywhere in the Project view and select Open C# Project.\n\n6) Rider will start automatically and open the solution related to this Unity project. Once the solution is loaded, Rider and the Unity Editor become connected. The Unity icon on the toolbar shows the current connection status:\n
"},{"location":"DeveloperGuide/EditorSetup/VSCode/","title":"VSCode Configuration","text":""},{"location":"DeveloperGuide/EditorSetup/VSCode/#visual-studio-code-setup-with-unity","title":"Visual Studio Code setup with Unity","text":""},{"location":"DeveloperGuide/EditorSetup/VSCode/#install-visual-studio-code","title":"Install Visual Studio Code","text":"

Follow the steps in: - https://code.visualstudio.com/docs/setup/linux

# Install the keys and repository\nsudo apt-get install wget gpg\nwget -qO- https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > packages.microsoft.gpg\nsudo install -D -o root -g root -m 644 packages.microsoft.gpg /etc/apt/keyrings/packages.microsoft.gpg\nsudo sh -c 'echo \"deb [arch=amd64,arm64,armhf signed-by=/etc/apt/keyrings/packages.microsoft.gpg] https://packages.microsoft.com/repos/code stable main\" > /etc/apt/sources.list.d/vscode.list'\nrm -f packages.microsoft.gpg\n\n# Then update the package cache and install the package using:\nsudo apt install apt-transport-https\nsudo apt update\nsudo apt install code\n
"},{"location":"DeveloperGuide/EditorSetup/VSCode/#install-the-dotnet-sdk","title":"Install the Dotnet SDK","text":"

Follow the steps in: - https://learn.microsoft.com/en-us/dotnet/core/install/linux-ubuntu#register-the-microsoft-package-repository

# Get Ubuntu version\ndeclare repo_version=$(if command -v lsb_release &> /dev/null; then lsb_release -r -s; else grep -oP '(?<=^VERSION_ID=).+' /etc/os-release | tr -d '\"'; fi)\n\n# Download Microsoft signing key and repository\nwget https://packages.microsoft.com/config/ubuntu/$repo_version/packages-microsoft-prod.deb -O packages-microsoft-prod.deb\n\n# Install Microsoft signing key and repository\nsudo dpkg -i packages-microsoft-prod.deb\n\n# Clean up\nrm packages-microsoft-prod.deb\n\n# Update packages\nsudo apt update\n\nsudo apt install dotnet-sdk-8.0\n
"},{"location":"DeveloperGuide/EditorSetup/VSCode/#install-the-extensions","title":"Install the extensions","text":"

Follow the steps in: - https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csdevkit - https://marketplace.visualstudio.com/items?itemName=VisualStudioToolsForUnity.vstuc - https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp

Launch VS Code Quick Open (Ctrl+P), paste the following command, and press enter. - ext install ms-dotnettools.csdevkit Repeat for: - ext install VisualStudioToolsForUnity.vstuc - ext install ms-dotnettools.csharp

"},{"location":"DeveloperGuide/EditorSetup/VSCode/#configure-the-unity","title":"Configure the Unity","text":"

It should all be configured now. You can either open up a script by double clicking in the Project window in Unity or by opening up the project in VS Code: - Assets -> Open C# Project

Syntax highlighting and CTRL-click navigation should work out of the box.

For more advanced features such as debugging, check the Unity Development with VS Code Documentation.

"},{"location":"DeveloperGuide/EditorSetup/VSCode/#additional-notes","title":"Additional notes","text":"

In the AWSIM project, the package Visual Studio Editor is already installed to satisfy the requirement from the Unity for Visual Studio Code extension.

"},{"location":"DeveloperGuide/HowToContribute/","title":"How to Contribute","text":""},{"location":"DeveloperGuide/HowToContribute/#how-to-contribute","title":"How to Contribute","text":"

Everyone is welcome!

"},{"location":"DeveloperGuide/HowToContribute/#how-can-i-get-help","title":"How can I get help?","text":"

Do not open issues for general support questions as we want to keep GitHub issues for confirmed bug reports. Instead, open a discussion in the Q&A category. The troubleshooting pages at AWSIM and at Autoware will also be helpful.

"},{"location":"DeveloperGuide/HowToContribute/#issue","title":"Issue","text":"

Before you post an issue, please search the Issues and the Discussions Q&A category to check whether it is a known issue.

This page explains how to create an issue from a repository.

"},{"location":"DeveloperGuide/HowToContribute/#bug-report","title":"Bug report","text":"

If you find a new bug, please create an issue here

"},{"location":"DeveloperGuide/HowToContribute/#feature-request","title":"Feature request","text":"

If you propose a new feature or have an idea, please create an issue here

"},{"location":"DeveloperGuide/HowToContribute/#task","title":"Task","text":"

If you plan to contribute to AWSIM Labs, please create an issue here.

"},{"location":"DeveloperGuide/HowToContribute/#question","title":"Question","text":""},{"location":"DeveloperGuide/HowToContribute/#pull-requests","title":"Pull requests","text":"

If you have an idea to improve the simulation, you can submit a pull request. The following process should be followed:

  1. Create a derived branch (feature/***) from the main branch.
  2. Create a pull request to the main branch.

Please keep the following in mind, while developing new features:

"},{"location":"DeveloperGuide/License/","title":"License","text":""},{"location":"DeveloperGuide/License/#awsim-licenses","title":"AWSIM Licenses","text":"

AWSIM License applies to tier4/AWSIM repositories and all content contained in the Releases.

"},{"location":"DeveloperGuide/License/#apache20-license","title":"Apache2.0 License","text":"
**********************************************************************************\n\n                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. 
For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. 
Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. (Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright 2022 TIER IV, Inc.\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n
"},{"location":"DeveloperGuide/License/#cc-by-nc-license","title":"CC BY-NC License","text":"
**********************************************************************************\n\nAttribution-NonCommercial 4.0 International\n\n=======================================================================\n\nCreative Commons Corporation (\"Creative Commons\") is not a law firm and\ndoes not provide legal services or legal advice. Distribution of\nCreative Commons public licenses does not create a lawyer-client or\nother relationship. Creative Commons makes its licenses and related\ninformation available on an \"as-is\" basis. Creative Commons gives no\nwarranties regarding its licenses, any material licensed under their\nterms and conditions, or any related information. Creative Commons\ndisclaims all liability for damages resulting from their use to the\nfullest extent possible.\n\nUsing Creative Commons Public Licenses\n\nCreative Commons public licenses provide a standard set of terms and\nconditions that creators and other rights holders may use to share\noriginal works of authorship and other material subject to copyright\nand certain other rights specified in the public license below. The\nfollowing considerations are for informational purposes only, are not\nexhaustive, and do not form part of our licenses.\n\n     Considerations for licensors: Our public licenses are\n     intended for use by those authorized to give the public\n     permission to use material in ways otherwise restricted by\n     copyright and certain other rights. Our licenses are\n     irrevocable. Licensors should read and understand the terms\n     and conditions of the license they choose before applying it.\n     Licensors should also secure all rights necessary before\n     applying our licenses so that the public can reuse the\n     material as expected. Licensors should clearly mark any\n     material not subject to the license. This includes other CC-\n     licensed material, or material used under an exception or\n     limitation to copyright. More considerations for licensors:\n    wiki.creativecommons.org/Considerations_for_licensors\n\n     Considerations for the public: By using one of our public\n     licenses, a licensor grants the public permission to use the\n     licensed material under specified terms and conditions. If\n     the licensor's permission is not necessary for any reason--for\n     example, because of any applicable exception or limitation to\n     copyright--then that use is not regulated by the license. Our\n     licenses grant only permissions under copyright and certain\n     other rights that a licensor has authority to grant. Use of\n     the licensed material may still be restricted for other\n     reasons, including because others have copyright or other\n     rights in the material. A licensor may make special requests,\n     such as asking that all changes be marked or described.\n     Although not required by our licenses, you are encouraged to\n     respect those requests where reasonable. More considerations\n     for the public:\n    wiki.creativecommons.org/Considerations_for_licensees\n\n=======================================================================\n\nCreative Commons Attribution-NonCommercial 4.0 International Public\nLicense\n\nBy exercising the Licensed Rights (defined below), You accept and agree\nto be bound by the terms and conditions of this Creative Commons\nAttribution-NonCommercial 4.0 International Public License (\"Public\nLicense\"). 
To the extent this Public License may be interpreted as a\ncontract, You are granted the Licensed Rights in consideration of Your\nacceptance of these terms and conditions, and the Licensor grants You\nsuch rights in consideration of benefits the Licensor receives from\nmaking the Licensed Material available under these terms and\nconditions.\n\n\nSection 1 -- Definitions.\n\n  a. Adapted Material means material subject to Copyright and Similar\n     Rights that is derived from or based upon the Licensed Material\n     and in which the Licensed Material is translated, altered,\n     arranged, transformed, or otherwise modified in a manner requiring\n     permission under the Copyright and Similar Rights held by the\n     Licensor. For purposes of this Public License, where the Licensed\n     Material is a musical work, performance, or sound recording,\n     Adapted Material is always produced where the Licensed Material is\n     synched in timed relation with a moving image.\n\n  b. Adapter's License means the license You apply to Your Copyright\n     and Similar Rights in Your contributions to Adapted Material in\n     accordance with the terms and conditions of this Public License.\n\n  c. Copyright and Similar Rights means copyright and/or similar rights\n     closely related to copyright including, without limitation,\n     performance, broadcast, sound recording, and Sui Generis Database\n     Rights, without regard to how the rights are labeled or\n     categorized. For purposes of this Public License, the rights\n     specified in Section 2(b)(1)-(2) are not Copyright and Similar\n     Rights.\n  d. Effective Technological Measures means those measures that, in the\n     absence of proper authority, may not be circumvented under laws\n     fulfilling obligations under Article 11 of the WIPO Copyright\n     Treaty adopted on December 20, 1996, and/or similar international\n     agreements.\n\n  e. Exceptions and Limitations means fair use, fair dealing, and/or\n     any other exception or limitation to Copyright and Similar Rights\n     that applies to Your use of the Licensed Material.\n\n  f. Licensed Material means the artistic or literary work, database,\n     or other material to which the Licensor applied this Public\n     License.\n\n  g. Licensed Rights means the rights granted to You subject to the\n     terms and conditions of this Public License, which are limited to\n     all Copyright and Similar Rights that apply to Your use of the\n     Licensed Material and that the Licensor has authority to license.\n\n  h. Licensor means the individual(s) or entity(ies) granting rights\n     under this Public License.\n\n  i. NonCommercial means not primarily intended for or directed towards\n     commercial advantage or monetary compensation. For purposes of\n     this Public License, the exchange of the Licensed Material for\n     other material subject to Copyright and Similar Rights by digital\n     file-sharing or similar means is NonCommercial provided there is\n     no payment of monetary compensation in connection with the\n     exchange.\n\n  j. 
Share means to provide material to the public by any means or\n     process that requires permission under the Licensed Rights, such\n     as reproduction, public display, public performance, distribution,\n     dissemination, communication, or importation, and to make material\n     available to the public including in ways that members of the\n     public may access the material from a place and at a time\n     individually chosen by them.\n\n  k. Sui Generis Database Rights means rights other than copyright\n     resulting from Directive 96/9/EC of the European Parliament and of\n     the Council of 11 March 1996 on the legal protection of databases,\n     as amended and/or succeeded, as well as other essentially\n     equivalent rights anywhere in the world.\n\n  l. You means the individual or entity exercising the Licensed Rights\n     under this Public License. Your has a corresponding meaning.\n\n\nSection 2 -- Scope.\n\n  a. License grant.\n\n       1. Subject to the terms and conditions of this Public License,\n          the Licensor hereby grants You a worldwide, royalty-free,\n          non-sublicensable, non-exclusive, irrevocable license to\n          exercise the Licensed Rights in the Licensed Material to:\n\n            a. reproduce and Share the Licensed Material, in whole or\n               in part, for NonCommercial purposes only; and\n\n            b. produce, reproduce, and Share Adapted Material for\n               NonCommercial purposes only.\n\n       2. Exceptions and Limitations. For the avoidance of doubt, where\n          Exceptions and Limitations apply to Your use, this Public\n          License does not apply, and You do not need to comply with\n          its terms and conditions.\n\n       3. Term. The term of this Public License is specified in Section\n          6(a).\n\n       4. Media and formats; technical modifications allowed. The\n          Licensor authorizes You to exercise the Licensed Rights in\n          all media and formats whether now known or hereafter created,\n          and to make technical modifications necessary to do so. The\n          Licensor waives and/or agrees not to assert any right or\n          authority to forbid You from making technical modifications\n          necessary to exercise the Licensed Rights, including\n          technical modifications necessary to circumvent Effective\n          Technological Measures. For purposes of this Public License,\n          simply making modifications authorized by this Section 2(a)\n          (4) never produces Adapted Material.\n\n       5. Downstream recipients.\n\n            a. Offer from the Licensor -- Licensed Material. Every\n               recipient of the Licensed Material automatically\n               receives an offer from the Licensor to exercise the\n               Licensed Rights under the terms and conditions of this\n               Public License.\n\n            b. No downstream restrictions. You may not offer or impose\n               any additional or different terms or conditions on, or\n               apply any Effective Technological Measures to, the\n               Licensed Material if doing so restricts exercise of the\n               Licensed Rights by any recipient of the Licensed\n               Material.\n\n       6. No endorsement. 
Nothing in this Public License constitutes or\n          may be construed as permission to assert or imply that You\n          are, or that Your use of the Licensed Material is, connected\n          with, or sponsored, endorsed, or granted official status by,\n          the Licensor or others designated to receive attribution as\n          provided in Section 3(a)(1)(A)(i).\n\n  b. Other rights.\n\n       1. Moral rights, such as the right of integrity, are not\n          licensed under this Public License, nor are publicity,\n          privacy, and/or other similar personality rights; however, to\n          the extent possible, the Licensor waives and/or agrees not to\n          assert any such rights held by the Licensor to the limited\n          extent necessary to allow You to exercise the Licensed\n          Rights, but not otherwise.\n\n       2. Patent and trademark rights are not licensed under this\n          Public License.\n\n       3. To the extent possible, the Licensor waives any right to\n          collect royalties from You for the exercise of the Licensed\n          Rights, whether directly or through a collecting society\n          under any voluntary or waivable statutory or compulsory\n          licensing scheme. In all other cases the Licensor expressly\n          reserves any right to collect such royalties, including when\n          the Licensed Material is used other than for NonCommercial\n          purposes.\n\n\nSection 3 -- License Conditions.\n\nYour exercise of the Licensed Rights is expressly made subject to the\nfollowing conditions.\n\n  a. Attribution.\n\n       1. If You Share the Licensed Material (including in modified\n          form), You must:\n\n            a. retain the following if it is supplied by the Licensor\n               with the Licensed Material:\n\n                 i. identification of the creator(s) of the Licensed\n                    Material and any others designated to receive\n                    attribution, in any reasonable manner requested by\n                    the Licensor (including by pseudonym if\n                    designated);\n\n                ii. a copyright notice;\n\n               iii. a notice that refers to this Public License;\n\n                iv. a notice that refers to the disclaimer of\n                    warranties;\n\n                 v. a URI or hyperlink to the Licensed Material to the\n                    extent reasonably practicable;\n\n            b. indicate if You modified the Licensed Material and\n               retain an indication of any previous modifications; and\n\n            c. indicate the Licensed Material is licensed under this\n               Public License, and include the text of, or the URI or\n               hyperlink to, this Public License.\n\n       2. You may satisfy the conditions in Section 3(a)(1) in any\n          reasonable manner based on the medium, means, and context in\n          which You Share the Licensed Material. For example, it may be\n          reasonable to satisfy the conditions by providing a URI or\n          hyperlink to a resource that includes the required\n          information.\n\n       3. If requested by the Licensor, You must remove any of the\n          information required by Section 3(a)(1)(A) to the extent\n          reasonably practicable.\n\n       4. 
If You Share Adapted Material You produce, the Adapter's\n          License You apply must not prevent recipients of the Adapted\n          Material from complying with this Public License.\n\n\nSection 4 -- Sui Generis Database Rights.\n\nWhere the Licensed Rights include Sui Generis Database Rights that\napply to Your use of the Licensed Material:\n\n  a. for the avoidance of doubt, Section 2(a)(1) grants You the right\n     to extract, reuse, reproduce, and Share all or a substantial\n     portion of the contents of the database for NonCommercial purposes\n     only;\n\n  b. if You include all or a substantial portion of the database\n     contents in a database in which You have Sui Generis Database\n     Rights, then the database in which You have Sui Generis Database\n     Rights (but not its individual contents) is Adapted Material; and\n\n  c. You must comply with the conditions in Section 3(a) if You Share\n     all or a substantial portion of the contents of the database.\n\nFor the avoidance of doubt, this Section 4 supplements and does not\nreplace Your obligations under this Public License where the Licensed\nRights include other Copyright and Similar Rights.\n\n\nSection 5 -- Disclaimer of Warranties and Limitation of Liability.\n\n  a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE\n     EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS\n     AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF\n     ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,\n     IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,\n     WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR\n     PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,\n     ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT\n     KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT\n     ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.\n\n  b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE\n     TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,\n     NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,\n     INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,\n     COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR\n     USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN\n     ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR\n     DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR\n     IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.\n\n  c. The disclaimer of warranties and limitation of liability provided\n     above shall be interpreted in a manner that, to the extent\n     possible, most closely approximates an absolute disclaimer and\n     waiver of all liability.\n\n\nSection 6 -- Term and Termination.\n\n  a. This Public License applies for the term of the Copyright and\n     Similar Rights licensed here. However, if You fail to comply with\n     this Public License, then Your rights under this Public License\n     terminate automatically.\n\n  b. Where Your right to use the Licensed Material has terminated under\n     Section 6(a), it reinstates:\n\n       1. automatically as of the date the violation is cured, provided\n          it is cured within 30 days of Your discovery of the\n          violation; or\n\n       2. 
upon express reinstatement by the Licensor.\n\n     For the avoidance of doubt, this Section 6(b) does not affect any\n     right the Licensor may have to seek remedies for Your violations\n     of this Public License.\n\n  c. For the avoidance of doubt, the Licensor may also offer the\n     Licensed Material under separate terms or conditions or stop\n     distributing the Licensed Material at any time; however, doing so\n     will not terminate this Public License.\n\n  d. Sections 1, 5, 6, 7, and 8 survive termination of this Public\n     License.\n\n\nSection 7 -- Other Terms and Conditions.\n\n  a. The Licensor shall not be bound by any additional or different\n     terms or conditions communicated by You unless expressly agreed.\n\n  b. Any arrangements, understandings, or agreements regarding the\n     Licensed Material not stated herein are separate from and\n     independent of the terms and conditions of this Public License.\n\n\nSection 8 -- Interpretation.\n\n  a. For the avoidance of doubt, this Public License does not, and\n     shall not be interpreted to, reduce, limit, restrict, or impose\n     conditions on any use of the Licensed Material that could lawfully\n     be made without permission under this Public License.\n\n  b. To the extent possible, if any provision of this Public License is\n     deemed unenforceable, it shall be automatically reformed to the\n     minimum extent necessary to make it enforceable. If the provision\n     cannot be reformed, it shall be severed from this Public License\n     without affecting the enforceability of the remaining terms and\n     conditions.\n\n  c. No term or condition of this Public License will be waived and no\n     failure to comply consented to unless expressly agreed to by the\n     Licensor.\n\n  d. Nothing in this Public License constitutes or may be interpreted\n     as a limitation upon, or waiver of, any privileges and immunities\n     that apply to the Licensor or You, including from the legal\n     processes of any jurisdiction or authority.\n\n=======================================================================\n\nCreative Commons is not a party to its public\nlicenses. Notwithstanding, Creative Commons may elect to apply one of\nits public licenses to material it publishes and in those instances\nwill be considered the \u201cLicensor.\u201d The text of the Creative Commons\npublic licenses is dedicated to the public domain under the CC0 Public\nDomain Dedication. Except for the limited purpose of indicating that\nmaterial is shared under a Creative Commons public license or as\notherwise permitted by the Creative Commons policies published at\ncreativecommons.org/policies, Creative Commons does not authorize the\nuse of the trademark \"Creative Commons\" or any other trademark or logo\nof Creative Commons without its prior written consent including,\nwithout limitation, in connection with any unauthorized modifications\nto any of its public licenses or any other arrangements,\nunderstandings, or agreements concerning use of licensed material. For\nthe avoidance of doubt, this paragraph does not form part of the\npublic licenses.\n\nCreative Commons may be contacted at creativecommons.org\n
"},{"location":"DeveloperGuide/TroubleShooting/","title":"Trouble shooting","text":""},{"location":"DeveloperGuide/TroubleShooting/#trouble-shooting","title":"Trouble shooting","text":"

This document describes the most common errors encountered when working with AWSIM or Autoware.

Trouble Solution Massive output of Plugins errors git clone the AWSIM repository again error : RuntimeError: error not set, at C:\\ci\\ws\\src\\ros2\\rcl\\rcl\\src\\rcl\\node.c:262 Set up environment variables and config around ROS2 correctly. For example: - Environment variables - cyclonedds_config.xml $ ros2 topic list is not displayed - your machine's ROS_DOMAIN_ID is different - ROS2 is not sourced Using AWSIM on Windows and Autoware on Ubuntu, $ ros2 topic list is not displayed. Allow the communication in Windows Firewall Self-driving stops in the middle of the road. Check if your map data is correct (PointCloud, VectorMap, 3D fbx models) Connecting AWSIM and Autoware results in bad network Make ROS localhost-only. Include the following in the .bashrc (The password will be requested at terminal startup after OS startup.) export ROS_LOCALHOST_ONLY=1; export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp; if [ ! -e /tmp/cycloneDDS_configured ]; then sudo sysctl -w net.core.rmem_max=2147483647; sudo ip link set lo multicast on; touch /tmp/cycloneDDS_configured; fi LiDAR (colored pointcloud on RViz) does not show. Reduce the processing load with the following commands. This can only be applied to Autoware's awsim-stable branch. cd <path_to_your_autoware_folder>; wget \"https://drive.google.com/uc?export=download&id=11mkwfg-OaXIp3Z5c3R58Pob3butKwE1Z\" -O patch.sh; bash patch.sh && rm patch.sh Error when starting AWSIM binary: segmentation fault (core dumped) - Check if your Nvidia drivers or Vulkan API are installed correctly - When building the binary, please pay attention to whether the Graphic Jobs option in Player Settings is disabled. It should be disabled since it may produce segmentation fault errors. Please check the forum for more details. Initial pose does not match automatically. Set the initial pose manually. Unity crashes. Check the log for the cause of the error. Editor log file location: Windows: C:\\Users\\username\\AppData\\Local\\Unity\\Editor\\Editor.log Linux: ~/.config/unity3d/.Editor.log Player log file location: Windows: C:\\Users\\username\\AppData\\LocalLow\\CompanyName\\ProductName\\output_log.txt Linux: ~/.config/unity3d/CompanyName/ProductName/Player.log See also: Unity Documentation - Log Files Safe mode dialog appears when starting UnityEditor, or error: No usable version of libssl was found 1. download libssl $ wget http://security.ubuntu.com/ubuntu/pool/main/o/openssl1.0/libssl1.0.0_1.0.2n-1ubuntu5.11_amd64.deb 2. install sudo dpkg -i libssl1.0.0_1.0.2n-1ubuntu5.11_amd64.deb (Windows) Unity Editor's error: Plugins: Failed to load 'Assets/RGLUnityPlugin/Plugins/Windows/x86_64/RobotecGPULidar.dll' because one or more of its dependencies could not be loaded. Install Microsoft Visual C++ Redistributable packages for Visual Studio 2015, 2017, 2019, and 2022 (X64 Architecture) (Windows) Built binary or Unity Editor freezes when the simulation is started Update/install the latest NIC (Network Interface Card) drivers for your PC. Especially, if you can find the latest drivers provided by the chip vendors for the interfaces (not by Microsoft), we recommend the vendors' drivers."},{"location":"GettingStarted/QuickStartDemo/","title":"Quick Start Demo","text":""},{"location":"GettingStarted/QuickStartDemo/#quick-start-demo","title":"Quick Start Demo","text":"

Below you can find instructions on how to set up the self-driving demo of the AWSIM simulation controlled by Autoware. The instructions assume the Ubuntu OS is used.

"},{"location":"GettingStarted/QuickStartDemo/#demo-configuration","title":"Demo configuration","text":"

The simulation provided in the AWSIM demo is configured as follows:

AWSIM Demo Settings Vehicle Lexus RX 450h Environment Japan Tokyo Nishishinjuku Sensors GNSS, IMU, 3 x VLP16, Traffic Light Camera Traffic Randomized traffic ROS2 humble"},{"location":"GettingStarted/QuickStartDemo/#pc-specs","title":"PC specs","text":"

Please make sure that your machine meets the following requirements in order to run the simulation correctly:

Required PC Specs OS Ubuntu 22.04 CPU 6 cores and 12 threads or higher GPU RTX 2080Ti or higher Nvidia Driver (Ubuntu 22) >=545"},{"location":"GettingStarted/QuickStartDemo/#dds-configuration","title":"DDS configuration","text":"

In order to run AWSIM Labs with the best performance and without hogging the network, please follow the steps below.

Add the following lines to ~/.bashrc file:

if [ ! -e /tmp/cycloneDDS_configured ]; then\n    sudo sysctl -w net.core.rmem_max=2147483647\n    sudo sysctl -w net.ipv4.ipfrag_time=3\n    sudo sysctl -w net.ipv4.ipfrag_high_thresh=134217728     # (128 MB)\n    sudo ip link set lo multicast on\n    touch /tmp/cycloneDDS_configured\nfi\n

Each time you restart the machine and open a new terminal, the commands above will be executed once; they will not run again until the next restart.
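If you want to confirm that these settings are active in the current session, you can query them directly (an optional quick check using standard Linux tools; not part of the original guide):

sysctl net.core.rmem_max net.ipv4.ipfrag_time net.ipv4.ipfrag_high_thresh\n# The printed flags should include MULTICAST:\nip link show lo\n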

"},{"location":"GettingStarted/QuickStartDemo/#cyclonedds-configuration","title":"CycloneDDS configuration","text":"

Save the following as cyclonedds.xml in your home directory ~:

<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n<CycloneDDS xmlns=\"https://cdds.io/config\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"https://cdds.io/config https://raw.githubusercontent.com/eclipse-cyclonedds/cyclonedds/master/etc/cyclonedds.xsd\">\n    <Domain Id=\"any\">\n        <General>\n            <Interfaces>\n                <NetworkInterface name=\"lo\" priority=\"default\" multicast=\"default\" />\n            </Interfaces>\n            <AllowMulticast>default</AllowMulticast>\n            <MaxMessageSize>65500B</MaxMessageSize>\n        </General>\n        <Internal>\n            <SocketReceiveBufferSize min=\"10MB\"/>\n            <Watermarks>\n                <WhcHigh>500kB</WhcHigh>\n            </Watermarks>\n        </Internal>\n    </Domain>\n</CycloneDDS>\n

Make sure the following lines are added to the ~/.bashrc file:

export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp\nexport CYCLONEDDS_URI=/home/your_username/cyclonedds.xml\n

Replace your_username with your actual username.

Note

You should use the absolute path to the cyclonedds.xml file.
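If you prefer not to hard-code your username, the following variant behaves the same, since $HOME expands to an absolute path when ~/.bashrc is sourced (a minor variation on the lines above, assuming a bash shell):

export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp\nexport CYCLONEDDS_URI=$HOME/cyclonedds.xml\n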

Warning

A system restart is required for these changes to work.

Warning

DO NOT set export ROS_LOCALHOST_ONLY=1. The CycloneDDS configuration above is sufficient.

"},{"location":"GettingStarted/QuickStartDemo/#start-the-demo","title":"Start the demo","text":""},{"location":"GettingStarted/QuickStartDemo/#running-the-awsim-demo","title":"Running the AWSIM demo","text":"

To run the simulator, please follow the steps below.

  1. Install the Nvidia GPU driver (skip if already installed). 1. Add the Nvidia driver PPA to the apt repositories.

    sudo add-apt-repository ppa:graphics-drivers/ppa\nsudo apt update\n
    2. Install the recommended version of the driver.
    sudo ubuntu-drivers autoinstall\n\n# or install a specific version (following was tested)\nsudo apt install nvidia-driver-550\n
    3. Reboot your machine so that the installed driver is detected by the system.
    sudo reboot\n
    4. Open a terminal and check that the nvidia-smi command is available and outputs a summary similar to the one presented below.
    $ nvidia-smi\n+-----------------------------------------------------------------------------------------+\n| NVIDIA-SMI 550.54.15              Driver Version: 550.54.15      CUDA Version: 12.4     |\n|-----------------------------------------+------------------------+----------------------+\n| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |\n| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |\n|                                         |                        |               MIG M. |\n|=========================================+========================+======================|\n|   0  NVIDIA GeForce RTX 3080        Off |   00000000:2D:00.0  On |                  N/A |\n| 30%   40C    P8             35W /  320W |    5299MiB /  10240MiB |      7%      Default |\n|                                         |                        |                  N/A |\n+-----------------------------------------+------------------------+----------------------+\n...\n
  2. Install the Vulkan graphics library (skip if already installed). 1. Update the apt package index.

    sudo apt update\n
    2. Install the library. (A quick verification sketch follows after this list.)
    sudo apt install libvulkan1\n
  3. Download and Run AWSIM Demo binary.

    1. Download the latest release from the AWSIM Labs GitHub Release Page.

    2. Unzip the downloaded file.

    3. Make the file executable.

      Right click the `awsim_labs.x86_64` file and check the `Execute` checkbox\n\n  ![](Image_1.png)\n\n  or execute the command below.\n\n  ```\n  chmod +x <path to AWSIM folder>/awsim_labs.x86_64\n  ```\n

    4. Launch awsim_labs.x86_64.

    ./<path to AWSIM folder>/awsim_labs.x86_64\n
      It may take some time for the application to start, so please wait until an image similar to the one presented below is visible in your application window.\n\n  ![](Image_0.png)\n
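As a quick sanity check of the Vulkan installation from step 2, you can query the available Vulkan devices. This sketch assumes the vulkan-tools package, which is not installed by the steps above:

sudo apt install vulkan-tools\n# Your Nvidia GPU should be listed as a Vulkan device:\nvulkaninfo --summary\n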
"},{"location":"GettingStarted/QuickStartDemo/#launching-autoware","title":"Launching Autoware","text":"

In order to configure and run the Autoware software with the AWSIM demo, please:

  1. Download map files (pcd, osm) and unzip them.

    Download Map files (pcd, osm)

  2. Clone Autoware and move to the directory.

    git clone https://github.com/autowarefoundation/autoware.git\ncd autoware\n
  3. Switch branch to main.
    git checkout main\n
  4. Configure the environment. (Skip if Autoware environment has been configured before)
    ./setup-dev-env.sh\n
  5. Create the src directory and clone external dependent repositories into it.
    mkdir src\nvcs import src < autoware.repos\n
  6. Install dependent ROS packages.
    source /opt/ros/humble/setup.bash\nrosdep update\nrosdep install -y --from-paths src --ignore-src --rosdistro $ROS_DISTRO\n
  7. Build the workspace.
    colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=RelWithDebInfo -DCMAKE_EXPORT_COMPILE_COMMANDS=1\n
  8. Launch Autoware.
    source install/setup.bash\nros2 launch autoware_launch e2e_simulator.launch.xml vehicle_model:=sample_vehicle sensor_model:=awsim_labs_sensor_kit map_path:=<absolute path of map folder>\n\n# Use the absolute path for the map folder, don't use the ~ operator.\n\n# Example:\nros2 launch autoware_launch e2e_simulator.launch.xml vehicle_model:=sample_vehicle sensor_model:=awsim_labs_sensor_kit map_path:=/home/your_username/autoware_map/nishishinjuku_autoware_map\n
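Once AWSIM and Autoware are both running, one optional way to confirm that they are exchanging data is to inspect the ROS2 graph from another terminal (a debugging sketch; exact topic names may vary between Autoware versions):

source install/setup.bash\n# Simulation time published by AWSIM:\nros2 topic echo /clock --once\nros2 topic list | grep -i vehicle\n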
"},{"location":"GettingStarted/QuickStartDemo/#lets-run-the-self-driving-simulation","title":"Let's run the self-Driving simulation","text":"
  1. Launch AWSIM and Autoware according to the steps described earlier in this document.

  2. Autoware will automatically set its pose estimation as presented below.

  3. Set the navigation goal for the vehicle.

  4. Optionally, you can define an intermediate point through which the vehicle will travel on its way to the destination. The generated path can be seen on the image below.

  5. Enable self-driving.

To make the vehicle start navigating, please engage its operation using the command below.

cd autoware\nsource install/setup.bash\nros2 topic pub /autoware/engage autoware_auto_vehicle_msgs/msg/Engage '{engage: True}' -1\n
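If the vehicle does not start moving, one hypothetical check (not part of the original guide) is to confirm that the Engage message type is available in your workspace:

source install/setup.bash\nros2 interface show autoware_auto_vehicle_msgs/msg/Engage\n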

The self-driving simulation demo has been successfully launched!

"},{"location":"GettingStarted/QuickStartDemo/#troubleshooting","title":"Troubleshooting","text":"

In case of any problems with running the sample AWSIM binary with Autoware, start by checking our Troubleshooting page, which covers the most common problems.

"},{"location":"GettingStarted/QuickStartDemo/#appendix","title":"Appendix","text":""},{"location":"GettingStarted/SetupUnityProject/","title":"Setup Unity Project","text":""},{"location":"GettingStarted/SetupUnityProject/#setup-unity-project","title":"Setup Unity Project","text":"

Info

It is advised to checkout the Quick Start Demo tutorial before reading this section.

This page is a tutorial for setting up an AWSIM Unity project.

"},{"location":"GettingStarted/SetupUnityProject/#environment-preparation","title":"Environment preparation","text":""},{"location":"GettingStarted/SetupUnityProject/#system-setup","title":"System setup","text":"Ubuntu 22Windows
  1. Make sure your machine meets the required hardware specifications. - NOTE: PC requirements may vary depending on simulation contents which may change as the simulator develops
  2. Prepare a desktop PC with Ubuntu 22.04 installed.
  3. Install Nvidia drivers and Vulkan Graphics API.
  4. Install git.
  5. Set the ROS 2 middleware and the localhost-only mode in the ~/.profile file (or in ~/.bash_profile or ~/.bash_login if either of those exists):

    export ROS_LOCALHOST_ONLY=1\nexport RMW_IMPLEMENTATION=rmw_cyclonedds_cpp\n

    Warning

    A system restart is required for these changes to work.

  6. Set the system optimizations by adding this code to the very bottom of your ~/.bashrc file:

    if [ ! -e /tmp/cycloneDDS_configured ]; then\n    sudo sysctl -w net.core.rmem_max=2147483647\n    sudo ip link set lo multicast on\n    touch /tmp/cycloneDDS_configured\nfi\n

    Info

    As a result, each time you run the terminal (bash prompt), your OS will be configured for the best ROS 2 performance. Make sure you open your terminal at least once before running any instance of AWSIM (or the Editor running AWSIM).

  1. Make sure your machine meets the required hardware specifications. - NOTE: PC requirements may vary depending on simulation contents which may change as the simulator develops
  2. Prepare a desktop PC with Windows 10 or 11 (64 bit) installed.
  3. Install git.
  4. Install Microsoft Visual C++ Redistributable packages for Visual Studio 2015, 2017, 2019, and 2022 (X64 Architecture)
"},{"location":"GettingStarted/SetupUnityProject/#ros-2","title":"ROS 2","text":"

AWSIM comes with a standalone flavor of Ros2ForUnity. This means that, to avoid internal conflicts between different ROS 2 versions, you shouldn't run the Editor or AWSIM binary with ROS 2 sourced.

Warning

Do not run the AWSIM, Unity Hub, or the Editor with ROS 2 sourced.

Ubuntu 22Windows "},{"location":"GettingStarted/SetupUnityProject/#unity-installation","title":"Unity installation","text":"

Info

AWSIM's Unity version is currently 2021.1.7f1

Follow the steps below to install Unity on your machine:

  1. Install UnityHub to manage Unity projects. Please go to the Unity download page and download the latest UnityHub.AppImage.
  2. Install Unity 2021.1.7f1 via UnityHub. - Open a new terminal, navigate to the directory where UnityHub.AppImage was downloaded, and execute the following command (if it fails to start, see the note after this list):
    ./UnityHub.AppImage\n
    - To install Unity Editor please proceed as shown on the images below - At this point, your Unity installation process should have started.
      === \"Ubuntu 22\"\n  - *NOTE: If the installation process has not started after clicking the green button (image above), please copy the hyperlink (by rightclicking the button and selecting `Copy link address`) and add it as a argument for Unity Hub app. An example command:\n  ```\n  ./UnityHub.AppImage unityhub://2021.1.7f1/d91830b65d9b\n  ```\n

    - After successful installation the version will be available under the Installs tab in Unity Hub.
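If the UnityHub.AppImage did not start in step 2, it may simply be missing the executable bit, which is standard AppImage behavior:

chmod +x UnityHub.AppImage\n./UnityHub.AppImage\n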

"},{"location":"GettingStarted/SetupUnityProject/#open-awsim-project","title":"Open AWSIM project","text":"

To open the Unity AWSIM project in Unity Editor:

Using Unity HubUsing Terminal
  1. Make sure you have the AWSIM repository cloned and ROS 2 is not sourced.

    git clone git@github.com:autowarefoundation/AWSIM.git\n
  2. Launch UnityHub.

    ./UnityHub.AppImage\n

    Info

    If you are launching the Unity Hub from the Ubuntu applications menu (without the terminal), make sure that system optimizations are set. To be sure, run the terminal at least once before running the Unity Hub. This will apply the OS settings.

  3. Open the project in UnityHub - Click the Open button

    • Navigate to the directory where the AWSIM repository was cloned
    • The project should be added to Projects tab in Unity Hub. To launch the project in Unity Editor simply click the AWSIM item
    • The project is now ready to use
  1. Enter the AWSIM directory (make sure ROS 2 is not sourced).

    cd AWSIM\n
  2. If your Unity Editor is in the default location, run the project using the editor command.

    ~/Unity/Hub/Editor/2021.1.7f1/Editor/Unity -projectPath .\n

    Info

    If your Unity Editor is installed in a different location, please adjust the path accordingly.

Warning

If you get the safe mode dialog when starting UnityEditor, you may need to install openssl.

  1. download libssl $ wget http://security.ubuntu.com/ubuntu/pool/main/o/openssl1.0/libssl1.0.0_1.0.2n-1ubuntu5.13_amd64.deb
  2. install sudo dpkg -i libssl1.0.0_1.0.2n-1ubuntu5.13_amd64.deb
"},{"location":"GettingStarted/SetupUnityProject/#import-external-packages","title":"Import external packages","text":"

To properly run and use the AWSIM project in Unity, it is required to download the map package, which is not included in the repository.

  1. Download and import Nishishinjuku_URP_v0.1.0.unitypackage

    Download Map Package

  2. In Unity Editor, from the menu bar at the top, select Assets -> Import Package -> Custom Package... and navigate to the Nishishinjuku_urp.unitypackage file.

  3. The Nishishinjuku package has now been successfully imported under the Assets/AWSIM/Externals/ directory.

Info

The Externals directory is added to the .gitignore because the map has a large file size and should not be directly uploaded to the repository.

"},{"location":"GettingStarted/SetupUnityProject/#import-graphy-asset","title":"Import Graphy Asset","text":"

Import Graphy by following these instructions: Graphy Asset Setup

"},{"location":"GettingStarted/SetupUnityProject/#run-the-demo-in-editor","title":"Run the demo in Editor","text":"

The following steps describe how to run the demo in Unity Editor:

  1. Open the AutowareSimulation.unity scene placed under Assets/AWSIM/Scenes/Main directory
  2. Run the simulation by clicking Play button placed at the top section of Editor.
"},{"location":"GettingStarted/UsingOpenSCENARIO/","title":"Using OpenSCENARIO","text":""},{"location":"GettingStarted/UsingOpenSCENARIO/#using-openscenario","title":"Using OpenSCENARIO","text":"

Warning

Running AWSIM with scenario_simulator_v2 is still a prototype, so stable running is not guaranteed.

Below you can find instructions on how to set up OpenSCENARIO execution using scenario_simulator_v2 with AWSIM as the simulator. The instructions assume the Ubuntu OS is used.

"},{"location":"GettingStarted/UsingOpenSCENARIO/#prerequisites","title":"Prerequisites","text":"

Follow the Setup Unity Project tutorial.

"},{"location":"GettingStarted/UsingOpenSCENARIO/#build-autoware-with-scenario_simulator_v2","title":"Build Autoware with scenario_simulator_v2","text":"

In order to configure the Autoware software with the AWSIM demo, please:

  1. Clone RobotecAI's Autoware and move to the directory.
    git clone git@github.com:RobotecAI/autoware-1.git\ncd autoware\n
  2. Check out the awsim-ss2-stable branch
    git checkout awsim-ss2-stable\n
  3. Configure the environment. (Skip if Autoware environment has been configured before)
    ./setup-dev-env.sh\n
  4. Create the src directory and clone external dependent repositories into it.
    mkdir src\nvcs import src < autoware.repos\nvcs import src < simulator.repos\n
  5. Download shinjuku_map.zip archive

  6. Unzip it to src/simulator directory

    unzip <Download directory>/shinjuku_map.zip -d src/simulator\n
  7. Install dependent ROS packages.
    source /opt/ros/humble/setup.bash\nrosdep update\nrosdep install -y --from-paths src --ignore-src --rosdistro $ROS_DISTRO\n
  8. Build the workspace.
    colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS=\"-w\"\n
"},{"location":"GettingStarted/UsingOpenSCENARIO/#running-the-demo","title":"Running the demo","text":"
  1. Download the AWSIM_v1.2.0_ss2.zip archive and run it.

  2. Launch scenario_test_runner.

    source install/setup.bash\nros2 launch scenario_test_runner scenario_test_runner.launch.py                        \\\narchitecture_type:=awf/universe  record:=false                                         \\\nscenario:='$(find-pkg-share scenario_test_runner)/scenario/sample_awsim.yaml'          \\\nsensor_model:=awsim_sensor_kit  vehicle_model:=sample_vehicle                          \\\nlaunch_simple_sensor_simulator:=false autoware_launch_file:=\"e2e_simulator.launch.xml\" \\\ninitialize_duration:=260 port:=8080\n
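Before launching, you can list every argument accepted by the launch file with the standard ros2 launch option shown below (an optional step):

source install/setup.bash\nros2 launch scenario_test_runner scenario_test_runner.launch.py --show-args\n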
"},{"location":"GettingStarted/UsingOpenSCENARIO/#troubleshooting","title":"Troubleshooting","text":"

In case of problems, make sure that the regular demo works well with the Autoware built above. Follow the Troubleshooting page if necessary.

"},{"location":"GettingStarted/UsingOpenSCENARIO/#appendix","title":"Appendix","text":""},{"location":"Introduction/AWSIM/","title":"AWSIM Labs","text":""},{"location":"Introduction/AWSIM/#awsim-labs","title":"AWSIM Labs","text":"

AWSIM Labs is a fork of TIER IV/AWSIM, an open-source simulator made with Unity for autonomous driving research and development. It is developed for self-driving software like Autoware. This simulator aims to bridge the gap between the virtual and real worlds, enabling users to train and evaluate their autonomous systems in a safe and controlled environment before deploying them on real vehicles. It provides a realistic virtual environment for training, testing, and evaluating various aspects of autonomous driving systems.

AWSIM simulates a variety of real-world scenarios, with accurate physics and sensor models. It offers a wide range of sensors, such as cameras, GNSS, IMU, and LiDARs, allowing developers to simulate their autonomous vehicle's interactions with the environment accurately. The simulator also models dynamic objects, such as pedestrians, other vehicles, and traffic lights, making it possible to study interactions and decision-making in complex traffic scenarios. This enables the testing and evaluation of perception, planning, and control algorithms under different sensor configurations and scenarios.

AWSIM supports a flexible and modular architecture, making it easy to customize and extend its capabilities. Users can modify the current or add a new environment with their own assets and traffic rules to create custom scenarios to suit their specific research needs. This allows for the development and testing of advanced algorithms in diverse driving conditions.

Because AWSIM was developed mainly to work with Autoware, it supports:

Prerequisites

You can read more about the prerequisites and running AWSIM here.

Connection with Autoware

Introduction about how the connection between AWSIM and Autoware works can be read here.

"},{"location":"Introduction/AWSIM/#why-was-awsim-developed","title":"Why was AWSIM developed?","text":"

The main objectives of AWSIM are to facilitate research and development in autonomous driving, enable benchmarking of algorithms and systems, and foster collaboration and knowledge exchange within the autonomous driving community. By providing a realistic and accessible platform, AWSIM aims to accelerate the progress and innovation in the field of autonomous driving.

"},{"location":"Introduction/AWSIM/#architecture","title":"Architecture","text":"

To describe the architecture of AWSIM, first of all, it is necessary to mention the Scene. It contains all the objects occurring in the simulation of a specific scenario and their configurations. The default AWSIM scene that is developed to work with Autoware is called AutowareSimulation.

In the scene we can distinguish basic components such as MainCamera, ClockPublisher, EventSystem and Canvas. A detailed description of the scene and its components can be found here.

Besides the elements mentioned above, the scene contains two more, very important and complex components: Environment and EgoVehicle - described below.

"},{"location":"Introduction/AWSIM/#environment","title":"Environment","text":"

Environment is a component that contains all Visual Elements that simulate the environment in the scene and those that provide control over them. It also contains two components: Directional Light and Volume, which ensure suitable lighting for Visual Elements and simulate weather conditions. A detailed description of these components can be found here.

In addition to Visual Elements such as buildings or greenery, it contains the entire architecture responsible for traffic. The traffic involves NPCVehicles that are spawned in the simulation by TrafficSimulator - using traffic components. A quick overview of the traffic components is provided below, however, you can read their detailed description here.

NPCPedestrians are also Environment components, but they are not controlled by TrafficSimulator. They have attached scripts that control their movement - you can read more details here.

"},{"location":"Introduction/AWSIM/#traffic-components","title":"Traffic Components","text":"

TrafficLanes and StopLines are elements loaded into Environment from Lanelet2. TrafficLanes have defined cross-references in such a way as to create routes along the traffic lanes. In addition, each TrafficLane present at the intersection has specific conditions for yielding priority. TrafficSimulator uses TrafficLanes to spawn NPCVehicles and ensure their movement along these lanes. If a TrafficLane ends just before an intersection, it has a reference to a StopLine. Each StopLine at an intersection with TrafficLights has a reference to the nearest TrafficLight. TrafficLights belong to one of the visual element groups and provide an interface to control visual elements that simulate traffic light sources (bulbs). A single TrafficIntersection is responsible for controlling all TrafficLights at one intersection. A detailed description of the mentioned components is in this section.

"},{"location":"Introduction/AWSIM/#egovehicle","title":"EgoVehicle","text":"

EgoVehicle is a component responsible for simulating an autonomous vehicle moving around the scene. It includes:

A detailed description of EgoVehicle and its components mentioned above can be found here. The sensor placement on EgoVehicle used in the default scene is described here. Details about each of the individual sensors are available in the following sections: Pose, GNSS, LiDAR, IMU, Camera, Vehicle Status.

"},{"location":"Introduction/AWSIM/#fixedupdate-limitation","title":"FixedUpdate Limitation","text":"

In AWSIM, the sensors' publishing methods are triggered from the FixedUpdate function and the output frequency is controlled by:

timer += Time.deltaTime;\nvar interval = 1.0f / OutputHz;\ninterval -= 0.00001f; // Allow for accuracy errors.\nif (timer < interval)\n    return;\ntimer = 0;\n

Since this code runs within the FixedUpdate method, it's essential to note that Time.deltaTime is equal to Fixed Timestep, as stated in the Unity Time.deltaTime documentation. Consequently, with each invocation of FixedUpdate, the timer variable in the sensor script will increment by a constant value of Fixed Timestep, independent of the actual passage of real time. Additionally, as outlined in the Unity documentation, the FixedUpdate method might execute multiple times before the Update method is called, resulting in extremely small time intervals between successive FixedUpdate calls. The diagram below illustrates the mechanism of invoking the FixedUpdate event function.

During each frame (game tick) the following actions are performed:

As a consequence, this engine feature may result in unexpected behavior when FPS (Frames Per Second) is unstable or under certain combinations of FPS, Fixed Timestep, and sensor OutputHz. For example, with the default Fixed Timestep of 0.02 s, a sensor configured with OutputHz = 30 (an interval of roughly 0.033 s) can only publish on every second FixedUpdate call, i.e. every 0.04 s of simulation time, which yields an effective rate of 25 Hz rather than 30 Hz.

In case of low frame rates, it is advisable to reduce the Time Scale of the simulation. The Time Scale value impacts simulation time, which refers to the time that is simulated within the model and might or might not progress at the same rate as real-time. Therefore, by reducing the time scale, the progression of simulation time slows down, allowing the simulation more time to perform its tasks.

"},{"location":"Introduction/Autoware/","title":"Autoware","text":""},{"location":"Introduction/Autoware/#autoware","title":"Autoware","text":"

Autoware is an open-source software platform specifically designed for autonomous driving applications. It was created to provide a comprehensive framework for developing and testing autonomous vehicle systems. Autoware offers a collection of modules and libraries that assist in various tasks related to perception, planning, and control, making it easier for researchers and developers to build autonomous driving systems.

The primary purpose of Autoware is to enable the development of self-driving technologies by providing a robust and flexible platform. It aims to accelerate the research and deployment of autonomous vehicles by offering a ready-to-use software stack. Autoware focuses on urban driving scenarios and supports various sensors such as LiDARs, radars, and cameras, allowing for perception of the vehicle's surroundings.

"},{"location":"Introduction/Autoware/#why-use-awsim-with-autoware","title":"Why use AWSIM with Autoware?","text":"

Autoware can be used with AWSIM for several reasons. Firstly, simulators like AWSIM provide a cost-effective and safe environment for testing and validating autonomous driving algorithms before deploying them on real vehicles. Autoware's integration with a simulator allows developers to evaluate and fine-tune their algorithms without the risk of real-world accidents or damage.

Additionally, simulators enable developers to recreate complex driving scenarios, including difficult conditions or rare events, which may be difficult to replicate in real-world testing with such high fidelity. Autoware's compatibility with AWSIM allows seamless integration between the software and the simulated vehicle, enabling comprehensive testing and validation of autonomous driving capabilities. By utilizing a simulator, Autoware can be extensively tested under various scenarios to ensure its robustness and reliability.

Connection with Autoware

Introduction about how the connection between AWSIM and Autoware works can be read here.

"},{"location":"Introduction/Autoware/#architecture","title":"Architecture","text":"

In terms of architecture, Autoware follows a modular approach. It consists of multiple independent modules that communicate with each other through ROS2. This modular structure allows users to select and combine different modules based on their specific needs and requirements. The software stack comprises multiple components, including perception, localization, planning, and control modules. Here's a brief overview of each module:

"},{"location":"Introduction/CombinationWithAutoware/","title":"CombinationWithAutoware","text":"

Autoware is a powerful open-source software platform for autonomous driving. Its modular architecture, including perception, localization, planning, and control modules, provides a comprehensive framework for developing self-driving vehicles. Autoware combined with AWSIM simulator provides safe testing, validation, and optimization of autonomous driving algorithms in diverse scenarios.

Run with Autoware

If you would like to know how to run AWSIM with Autoware, we encourage you to read this section.

"},{"location":"Introduction/CombinationWithAutoware/#features","title":"Features","text":"

The combination of Autoware and AWSIM provides the opportunity to check the correctness of the vehicle's behavior in various traffic situations. Below are presented some typical features provided by this combination. Moreover, examples of detecting several bad behaviors are included.

"},{"location":"Introduction/CombinationWithAutoware/#engagement","title":"Engagement","text":" "},{"location":"Introduction/CombinationWithAutoware/#traffic-light-recognition","title":"Traffic light recognition","text":" "},{"location":"Introduction/CombinationWithAutoware/#interaction-with-vehicles","title":"Interaction with vehicles","text":" "},{"location":"Introduction/CombinationWithAutoware/#interaction-with-pedestrians","title":"Interaction with pedestrians","text":" "},{"location":"Introduction/CombinationWithAutoware/#detecting-bad-behaviors","title":"Detecting bad behaviors","text":" "},{"location":"Introduction/CombinationWithAutoware/#combination-architecture","title":"Combination Architecture","text":"

The combination of AWSIM with Autoware is possible thanks to Vehicle Interface and Sensing modules of Autoware architecture. The component responsible for ensuring connection with these modules from the AWSIM side is EgoVehicle. It has been adapted to the Autoware architecture and provides ROS2 topic-based communication. However, the other essential component is ClockPublisher, which provides simulation time for Autoware - also published on the topic - more details here.

The EgoVehicle component provides the publication of the current vehicle status through a script working within Vehicle Status. It provides real-time information such as the current speed, the current steering of the wheels, or the current states of the lights - these are outputs from AWSIM.

On the other hand, Vehicle Ros Input is responsible for providing the values of the outputs from Autoware. It subscribes to the current commands related to the given acceleration, gearbox gear or control of the specified lights.

Execution of the received commands is possible thanks to Vehicle, which sets appropriate accelerations on the Wheels and controls the visual elements of the vehicle.

The remaining data delivered from AWSIM to Autoware is sensor data, which provides information about the current state of the surrounding environment as well as the data necessary to accurately estimate the EgoVehicle position.

More about EgoVehicle and its scripts is described in this section.

"},{"location":"Introduction/CombinationWithAutoware/#sequence-diagram","title":"Sequence diagram","text":"

Below is a simplified sequence diagram of the information exchange in the connection between AWSIM and Autoware. As you can see, the first essential information published from AWSIM is Clock - the simulation time. Next, EgoVehicle is spawned and the first sensor data are published, which are used in the process of automatic position initialization on the Autoware side. At the same time, the simulation on the AWSIM side is updated.

Next in the diagram is the main information update loop in which:

The order of information exchange presented in the diagram is a simplification. The exchange of information takes place through the publish-subscribe model, and each piece of data is sent at a predefined frequency.

"},{"location":"Introduction/CombinationWithAutowareAndScenarioSimulator/","title":"Combination with Autoware and Scenario simulator v2","text":""},{"location":"Introduction/CombinationWithAutowareAndScenarioSimulator/#combination-with-autoware-and-scenario-simulator-v2","title":"Combination with Autoware and Scenario simulator v2","text":"

Scenario Simulator v2 (SS2) is a scenario testing framework specifically developed for Autoware, an open-source self-driving software platform. It serves as a tool for Autoware developers to conveniently create and execute scenarios across different simulators.

The primary goal of SS2 is to provide Autoware developers with an efficient means of writing scenarios once and then executing them in multiple simulators. By offering support for different simulators and scenario description formats, the framework ensures flexibility and compatibility.

The default scenario format in this framework is TIER IV Scenario Format version 2.0. The scenario defined in this format is converted by scenario_test_runner to openSCENARIO format, which is then interpreted by openscenario_interpreter. Based on this interpretation, traffic_simulator simulates traffic flow in an urban area. Each NPC has a behavior tree and executes commands from the scenario.

The framework uses ZeroMQ Inter-Process communication for seamless interaction between the simulator and the traffic_simulator. To ensure synchronous operation of the simulators, SS2 utilizes the Request/Reply sockets provided by ZeroMQ and exchanges binarized data through Protocol Buffers. This enables the simulators to run in a synchronized manner, enhancing the accuracy and reliability of scenario testing.

QuickStart Scenario simulator v2 with Autoware

If you would like to see how SS2 works with Autoware using default build-in simulator - simple_sensor_simulator (without running AWSIM) - we encourage you to read this tutorial.

"},{"location":"Introduction/CombinationWithAutowareAndScenarioSimulator/#combination-architecture","title":"Combination Architecture","text":"

AWSIM scene architecture used in combination with SS2 changes considerably compared to the default scene. Here traffic_simulator from SS2 replaces TrafficSimulator implementation in AWSIM - for this reason it and its StopLines, TrafficLanes and TrafficIntersection components are removed. Also, NPCPedestrian and NPCVehicles are not added as aggregators of NPCs in Environment.

Instead, their counterparts are added in ScenarioSimulatorConnector object that is responsible for spawning Entities of the scenario. Entity can be: Pedestrian, Vehicle, MiscObject and Ego. EgoEntity is the equivalent of EgoVehicle - which is also removed from the default scene. However, it has the same components - it still communicates with Autoware as described here. So it can be considered that EgoVehicle has not changed and NPCPedestrians and NPCVehicles are now controlled directly by the SS2.

A detailed description of the SS2 architecture is available here. A description of the communication via ROS2 between SS2 and Autoware can be found here.

"},{"location":"Introduction/CombinationWithAutowareAndScenarioSimulator/#sequence-diagram","title":"Sequence diagram","text":"

In the sequence diagram, the part responsible for AWSIM communication with Autoware also remained unchanged. The description available here is the valid description of the reference shown in the diagram below.

Communication between SS2 and AWSIM takes place via Request-Response messages, and is as follows:

  1. Launch - Autoware is started and initialized.
  2. Initialize - the environment in AWSIM is initialized, basic parameters are set.
  3. opt Ego spawn - optional, EgoEntity (with sensors) is spawned in the configuration defined in the scenario.
  4. opt NPC spawn loop - optional, all Entities (NPCs) defined in the scenario are spawned, the scenario may contain any number of each Entity type, it may not contain them at all or it may also be any combination of the available ones.
  5. update loop - this is the main loop where scenario commands are executed, first EgoEntity is updated - SS2 gets its status, and then every other Entity is updated - the status of each NPCs is set according to the scenario. Next, the simulation frame is updated - here the communication between Autoware and AWSIM takes place. The last step of the loop is to update the traffic light state.
  6. despawn loop - after the end of the scenario, all Entities spawned in the scene are despawned (including EgoEntity).
  7. Terminate - Autoware is terminated.

Documentation of the commands used in the sequence is available here.

"},{"location":"ProjectGuide/Directory/","title":"Directory","text":""},{"location":"ProjectGuide/Directory/#directory","title":"Directory","text":"

AWSIM has the following directory structure. Mostly they are grouped by file type.

AWSIM       //  root directory.\n \u2502\n \u2502\n \u251c\u2500Assets                           // Unity project Assets directory.\n \u2502  \u2502                               // Place external libraries\n \u2502  \u2502                               // under this directory.\n \u2502  \u2502                               // (e.g. RGLUnityPlugin, ROS2ForUnity, etc..)\n \u2502  \u2502\n \u2502  \u2502\n \u2502  \u251c\u2500AWSIM                         // Includes assets directly related to AWSIM\n \u2502  |                               // (Scripts, Prefabs etc.)\n \u2502  \u2502  \u2502\n \u2502  \u2502  \u2502\n \u2502  \u2502  \u251c\u2500Externals                  // Place for large files or\n \u2502  \u2502  |                            // external project dependencies\n \u2502  \u2502  |                            // (e.g. Ninshinjuku map asset).\n \u2502  \u2502  \u2502                            // The directory is added to `.gitignore`\n \u2502  \u2502  \u2502\n \u2502  \u2502  \u251c\u2500HDRPDefaultResources       // Unity HDRP default assets.\n \u2502  \u2502  \u2502\n \u2502  \u2502  \u251c\u2500Materials                  // Materials used commonly in Project.\n \u2502  \u2502  \u2502\n \u2502  \u2502  \u251c\u2500Models                     // 3D models\n \u2502  \u2502  \u2502  \u2502                         // Textures and materials for 3D models\n \u2502  \u2502  \u2502  \u2502                         // are also included.\n \u2502  \u2502  \u2502  \u2502\n \u2502  \u2502  \u2502  \u2514\u2500<3D Model>              // Directory of each 3D model.\n \u2502  \u2502  \u2502     \u2502\n \u2502  \u2502  \u2502     \u2502\n \u2502  \u2502  \u2502     \u251c\u2500Materials            // Materials used in 3D model.\n \u2502  \u2502  \u2502     \u2502\n \u2502  \u2502  \u2502     \u2502\n \u2502  \u2502  \u2502     \u2514\u2500Textures             // Textures used in 3D model.\n \u2502  \u2502  \u2502\n \u2502  \u2502  \u2502\n \u2502  \u2502  \u251c\u2500Prefabs                    // Prefabs not dependent on a specific scene.\n \u2502  \u2502  \u2502\n \u2502  \u2502  \u251c\u2500Scenes                     // Scenes\n \u2502  \u2502  \u2502  \u2502                         // Includes scene-specific scripts, etc.\n \u2502  \u2502  \u2502  \u2502\n \u2502  \u2502  \u2502  \u2502\n \u2502  \u2502  \u2502  \u251c\u2500Main                    // Scenes used in the simulation.\n \u2502  \u2502  \u2502  \u2502\n \u2502  \u2502  \u2502  \u2502\n \u2502  \u2502  \u2502  \u2514\u2500Samples                 // Sample Scenes showcasing components.\n \u2502  \u2502  \u2502\n \u2502  \u2502  \u2502\n \u2502  \u2502  \u2514\u2500Scripts                    // C# scripts.\n \u2502  \u2502\n \u2502  \u2502\n \u2502  \u251c\u2500RGLUnityPlugin        // Robotec GPU LiDAR external Library.\n \u2502  \u2502                       // see: https://github.com/RobotecAI/RobotecGPULidar\n \u2502  \u2502\n \u2502  \u2502\n \u2502  \u2514\u2500Ros2ForUnity          // ROS2 communication external Library.\n \u2502                          // see: https://github.com/RobotecAI/ros2-for-unity\n \u2502\n \u251c\u2500Packages         // Unity automatically generated directories.\n \u251c\u2500ProjectSettings  //\n \u251c\u2500UserSettings     //\n \u2502\n \u2502\n \u2514\u2500docs             // AWSIM documentation. Generated using mkdocs.\n                    // see: https://www.mkdocs.org/\n
"},{"location":"ProjectGuide/ExternalLibraries/","title":"External Libraries","text":""},{"location":"ProjectGuide/ExternalLibraries/#external-libraries","title":"External Libraries","text":"

List of external libraries used in AWSIM.

Library Usage URL ros2-for-unity ROS2 communication https://github.com/RobotecAI/ros2-for-unity Robotec-GPU-LiDAR LiDAR simulation https://github.com/RobotecAI/RobotecGPULidar"},{"location":"ProjectGuide/GitBranch/","title":"Git Branch","text":""},{"location":"ProjectGuide/GitBranch/#git-branch","title":"Git branch","text":"

The document presents the rules of branching adopted in the AWSIM development process.

"},{"location":"ProjectGuide/GitBranch/#branches","title":"Branches","text":"branch explain main Stable branch. Contains all the latest releases. feature/*** Feature implementation branch created from main. After implementation, it is merged into main. gh-pages Documentation hosted on GitHub pages."},{"location":"ProjectGuide/GitBranch/#branch-flow","title":"Branch flow","text":"
  1. Create feature/*** branch from main.
  2. Implement in feature/*** branch.
  3. Create a PR from the feature/*** branch to main branch. Merge after review.
"},{"location":"ProjectGuide/HotkeyList/","title":"Hotkey List","text":""},{"location":"ProjectGuide/HotkeyList/#hotkey-list","title":"Hotkey List","text":""},{"location":"ProjectGuide/HotkeyList/#vehiclekeyboardinputcs","title":"VehicleKeyboardInput.cs","text":"Key Feature D Change drive gear. P parking gear. R Reverse gear. N Neutral gear. 1 Left turn signal. 2 Right turn signal. 3 Hazard. 4 Turn signal off. Up arrow Accelerate. Left arrow Steering (Left). Right arrow Steering (Right). Down arrow Breaking.

W,A,S,D keys can also be used to control the vehicle, similar to the arrow keys.

"},{"location":"ProjectGuide/HotkeyList/#followcameracs","title":"FollowCamera.cs","text":"Key Feature C Camera rotation on/off toggle. Mouse drag Rotate Camera angle. Mouse wheel Zoom in/out of camera."},{"location":"ProjectGuide/HotkeyList/#hotkeyhandlercs","title":"HotkeyHandler.cs","text":"Key Feature Esc Toggle main menu Ctrl + R Reset ego vehicle"},{"location":"ProjectGuide/Scenes/","title":"Scenes","text":"

In the AWSIM Unity project there is one main scene (AutowareSimulation) and several additional ones that can be helpful during development. This section describes the purpose of each scene in the project.

"},{"location":"ProjectGuide/Scenes/#autowaresimulation","title":"AutowareSimulation","text":"

The AutowareSimulation scene is the main scene that is designed to run the AWSIM simulation together with Autoware. It allows for effortless operation, just run this scene, run Autoware with the correct map file and everything should work right out of the box.

"},{"location":"ProjectGuide/Scenes/#pointcloudmapping","title":"PointCloudMapping","text":"

The PointCloudMapping is a scene that is designed to create a point cloud using the Unity world. Using the RGLUnityPlugin and prefab Environment - on which there are models with Meshes - we are able to obtain a *.pcd file of the simulated world.

"},{"location":"ProjectGuide/Scenes/#sensorconfig","title":"SensorConfig","text":"

The SensorConfig scene was developed to perform a quick test of the sensors added to the EgoVehicle prefab. Replace the Lexus prefab with a vehicle prefab you developed and check whether all data that should be published is present, whether it is on the appropriate topics, and whether the data is correct.

"},{"location":"ProjectGuide/Scenes/#npcvehiclesample","title":"NPCVehicleSample","text":"

The NPCVehicleSample was developed to conduct a quick test of the developed vehicle. Replace the taxi prefab with a vehicle prefab you developed (EgoVehicle or NPCVehicle) and check whether the basic things are configured correctly. The description of how to develop your own vehicle and add it to the project is in this section.

"},{"location":"ProjectGuide/Scenes/#npcpedestriansample","title":"NPCPedestrianSample","text":"

The NPCPedestrianSample was developed to conduct a quick test of the developed pedestrian. Replace the NPC prefab in NPC Pedestrian Test script with a prefab you developed and check whether the basic things are configured correctly.

"},{"location":"ProjectGuide/Scenes/#trafficintersectionsample","title":"TrafficIntersectionSample","text":"

The TrafficIntersectionSample was developed to conduct a quick test of the developed traffic intersection. Replace the intersection configuration with your own and check whether it works correctly. You can add additional groups of lights and create much larger, more complex sequences. A description of how to configure your own traffic intersection is in this section.

"},{"location":"ProjectGuide/Scenes/#trafficlightsample","title":"TrafficLightSample","text":"

The TrafficLightSample was developed to conduct a quick test of a developed traffic lights model in cooperation with the script controlling it. Replace the lights and configuration with your own and check whether it works correctly.

"},{"location":"ProjectGuide/Scenes/#randomtrafficyielding","title":"RandomTrafficYielding","text":"

The RandomTrafficYielding scene was developed to conduct tests of the developed yielding rules at a single intersection.

"},{"location":"ProjectGuide/Scenes/#randomtrafficyieldingbirdeye","title":"RandomTrafficYieldingBirdEye","text":"

The RandomTrafficYieldingBirdEye scene was developed to conduct tests of the developed yielding rules with multiple vehicles moving around the entire environment.

"},{"location":"ProjectGuide/Scenes/#rgl-test-scenes","title":"RGL test scenes","text":"

The scenes described below are used for tests related to the external library RGLUnityPlugin (RGL) - you can read more about it in this section.

"},{"location":"ProjectGuide/Scenes/#lidarscenedevelop","title":"LidarSceneDevelop","text":"

The scene LidarSceneDevelop can be used as a complete, minimalistic example of how to set up RGL. It contains an RGLSceneManager component, four lidars, and an environment composed of a floor and walls.

"},{"location":"ProjectGuide/Scenes/#lidarskinnedstress","title":"LidarSkinnedStress","text":"

The scene LidarSkinnedStress can be used to test the performance of RGL. E.g. how performance is affected when using Regular Meshes compared to Skinned Meshes. The scene contains a large number of animated models that require meshes to be updated every frame, thus requiring more resources (CPU and data exchange with GPU).

"},{"location":"ProjectGuide/Scenes/#lidardisablingtest","title":"LidarDisablingTest","text":"

The scene LidarDisablingTest can be used to test RGL performance with similar objects but with different configurations. It allows you to check whether RGL works correctly when various components that can be sources of Meshes are disabled (Colliders, Regular Meshes, Skinned Meshes, ...).

"},{"location":"ProjectGuide/Scenes/#lidarinstancesegmentationdemo","title":"LidarInstanceSegmentationDemo","text":"

The LidarInstanceSegmentationDemo is a demo scene for instance segmentation feature. It contains a set of GameObjects with ID assigned and sample lidar that publishes output to the ROS2 topic. The GameObjects are grouped to present different methods to assign IDs.

To run demo scene:

  1. Open scene: Assets/AWSIM/Scenes/Samples/LidarInstanceSegmentationDemo.unity
  2. Run simulation
  3. Open rviz2
  4. Set up rviz2 as follows: - Fixed frame: world, - PointCloud2 topic: lidar/instance_id, - Topic QoS as in the screenshot above, - Channel name: entity_id, - For better visualization, disable Autocompute intensity and set min to 0 and max to 50.
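A minimal sketch of the corresponding terminal session, assuming ROS 2 Humble and that the simulation is already running in the Editor:

source /opt/ros/humble/setup.bash\n# The demo lidar topic should appear:\nros2 topic list | grep instance_id\nrviz2\n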
"}]} \ No newline at end of file diff --git a/pr-59/sitemap.xml.gz b/pr-59/sitemap.xml.gz index 5a5c8635e..ab2ec6441 100644 Binary files a/pr-59/sitemap.xml.gz and b/pr-59/sitemap.xml.gz differ