diff --git a/html/acoustics_camera.html b/html/acoustics_camera.html index 638b5c0f..dd26ac0d 100644 --- a/html/acoustics_camera.html +++ b/html/acoustics_camera.html @@ -90,17 +90,17 @@
This camera simulates a microphone array, or, in other words, a directional microphone. Its readings are assembled into a spherical pattern, consisting of one floating-point measurement for each direction emerging from the microphone center. It is assumed that the microphone array is mounted on the robot and it takes readings as the robot moves around.
For visualization purposes, the microphone measurements are converted to an acoustic "image". Hence, a virtual camera is created centered at the microphone and with a certain pose that is ideally facing the direction where all or most of the interesting sounds are coming from. The reading at a pixel of that camera is the value of the microphone measurement in the direction of the ray going from the microphone (and camera) center through that pixel.
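In other words, the acoustic image is just a resampling of the spherical measurement pattern through a camera model. A minimal Python sketch of that per-pixel lookup (the pinhole assumption and all names here are illustrative, not the node's actual code):
import numpy as np

def pixel_to_direction(col, row, K, cam_to_world):
    # Ray direction (world frame) through pixel (col, row) of a pinhole camera with
    # intrinsics matrix K and camera-to-world rotation cam_to_world.
    ray_cam = np.linalg.inv(K).dot(np.array([col, row, 1.0]))
    ray_world = cam_to_world.dot(ray_cam)
    return ray_world / np.linalg.norm(ray_world)

def acoustics_pixel_value(col, row, K, cam_to_world, microphone_reading):
    # The acoustic "image" value at a pixel is the microphone measurement sampled
    # along the ray from the camera (microphone) center through that pixel.
    return microphone_reading(pixel_to_direction(col, row, K, cam_to_world))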
-The acoustics camera depends on the pyroomacoustics package. This package can be installed together with its dependencies in a Python 2.7 environment using the command:
pip install numpy==1.15.4 scipy==0.18 pillow==6 PyWavelets==0.4.0 \ networkx==1.8 matplotlib==2.0.0 scikit-image==0.14 \ pyroomacoustics==0.3.1
It will normally be installed in:
$HOME/.local/lib/python2.7/site-packages/pyroomacoustics -
The acoustics camera ROS node can be run as part of the simulator. For that, first set up the environment along the lines of:
export ASTROBEE_SOURCE_PATH=$HOME/astrobee/src export ASTROBEE_BUILD_PATH=$HOME/astrobee @@ -115,7 +115,7 @@roslaunch acoustics_cam acoustics_cam.launch output:=screen
The acoustics camera can be run without ROS as:
$ISAAC_WS/src/astrobee/simulation/acoustics_cam/nodes/acoustics_cam debug_mode
In that case it assumes that the robot pose is the value set in the field "debug_robot_pose" in acoustics_cam.json (see below). In this mode it will only create a plot of the acoustics cam image. The sources of sounds will be represented as crosses in this plot, and the camera (microphone) position will be shown as a star.
-The acoustics camera subscribes to
/loc/truth/pose
to get the robot pose. It publishes its image, camera pose, and camera intrinsics on topics:
/hw/cam_acoustics @@ -123,7 +123,7 @@/sim/acoustics_cam/info
By default, the camera takes pictures as often as it can (see the configuration below), which is rarely, in fact, as it is slow. It listens however to the topic:
/comm/dds/command
for guest science commands that may tell it to take a single picture at a specific time, or to take pictures continuously. Such a command must use the app name "gov.nasa.arc.irg.astrobee.acoustics_cam_image" (which is the "s" field in the first command argument) for it to be processed.
-The behavior of this camera is described in:
$ISAAC_WS/src/astrobee/simulation/acoustics_cam/acoustics_cam.json
It has the following entries:
diff --git a/html/analyst.html b/html/analyst.html index e204d4f9..19940c41 100644 --- a/html/analyst.html +++ b/html/analyst.html @@ -90,34 +90,34 @@The jupyter notebooks will be able to access data that is in the $HOME/data
and $HOME/data/bags
, so make sure all the relevant bag files are placed there.
For the Analyst notebook to be functional, it needs to start side-by-side with the database and the IUI (ISAAC user interface). To do so, the recommended method is to use the remote docker images, as:
$ISAAC_SRC/scripts/docker/run.sh --analyst --no-sim --remote
The ISAAC UI is hosted in: http://localhost:8080 The ArangoDB database is hosted in: http://localhost:8529 The Analyst Notebook is hosted in: http://localhost:8888/lab?token=isaac
-Please follow all the tutorials to familiarize yourself with the available functions and to detect if something is not working properly.
-Open the tutorial here.
This tutorial covers how to upload bag files to a local database. Be aware that uploading large bag files might take a long time. If possible select only the time intervals/topic names that are required for analysis to speed up the process.
-Open the tutorial here.
This tutorial covers how to display data uploaded to the database. It contains some examples of the most common data type / topics. You can filter the data that gets collected from the database using queries.
-Open the tutorial here.
This tutorial covers the available methods to visualize data in the ISAAC user interface (IUI).
Open the IUI 3D viewer here.
-Open the tutorial here.
Here, we use simulation tools to automatically build a train and a test dataset. The simulation dataset builder takes arguments such as target positions, model positions, and Gaussian noise. Using the simulated data, we use PyTorch to train the classifier of a previously trained CNN. We optimize the CNN using the train dataset, and use the test dataset to decide which iteration of the optimization to keep. With the trained CNN we can then run newly collected data through it, namely real captured image data.
-Open the tutorial
The Image anomaly detector contains a set of tools to analyse incoming images using Convolutional Neural Networks (CNNs). To build, train, and test the CNNs we use PyTorch.
-This package is needed in the anomaly/img_analysis node, so that we can analyse images and look for anomalies. The first step is to download the LibTorch ZIP archive; the link might change, so it is best to go to https://pytorch.org/ and select Linux->LibTorch->C++/Java
Important!: The link is the one labeled '(cxx11 ABI)'. If you select the '(Pre-cxx11 ABI)', it will break ROS:
wget https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-1.5.0%2Bcpu.zip
It is advised to unzip the package into a general directory such as '/usr/include'
unzip libtorch-shared-with-deps-latest.zip
To link the path, add this to your '$HOME/.bashrc' file:
export CMAKE_PREFIX_PATH=$CMAKE_PREFIX_PATH:/path/to/libtorch/share/cmake/Torch -
The python code containing the CNN definition and training is in resources/vent_cnn.py
Parameters: data_dir - path to the dataset. The dataset should have the correct structure for data import. Should be the same as 'path_dataset' in the Get Training data arguments. classes - specify the image classes; each class should be a folder name in the test and train folders. The default classes are ['free', 'obstacle', 'unknown']. Free means that it detected a free vent, obstacle means that the vent contains an obstacle, unknown means that the vent was not detected. num_epochs - number of epochs to train, default 30 model_name - saved model name, default "model_cnn.pt" trace_model_name - saved traced model name, default "traced_model_cnn.pt"
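The actual network and training loop live in resources/vent_cnn.py; as a rough illustration of the flow described above (optimize on the train split, select the best epoch on the test split, save both the model and a traced TorchScript copy), a minimal PyTorch sketch with a stand-in architecture and an assumed 64x64 input size could look like:
import torch
from torch import nn
from torchvision import datasets, transforms

# Example parameter values (see the parameter list above).
data_dir, classes = "dataset", ["free", "obstacle", "unknown"]
num_epochs, model_name, trace_model_name = 30, "model_cnn.pt", "traced_model_cnn.pt"

tf = transforms.Compose([transforms.Resize((64, 64)), transforms.ToTensor()])
train_set = datasets.ImageFolder(data_dir + "/train", transform=tf)
test_set = datasets.ImageFolder(data_dir + "/test", transform=tf)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=16)

# Small stand-in CNN classifier over the three classes.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 16 * 16, len(classes)))
optimizer = torch.optim.Adam(model.parameters())
loss_fn = nn.CrossEntropyLoss()

best_acc = 0.0
for epoch in range(num_epochs):
    model.train()
    for images, labels in train_loader:       # optimize on the train split
        optimizer.zero_grad()
        loss_fn(model(images), labels).backward()
        optimizer.step()
    model.eval()
    with torch.no_grad():                     # select the best epoch on the test split
        correct = sum((model(x).argmax(1) == y).sum().item() for x, y in test_loader)
    acc = correct / len(test_set)
    if acc > best_acc:
        best_acc = acc
        torch.save(model.state_dict(), model_name)
        traced = torch.jit.trace(model, torch.rand(1, 3, 64, 64))
        traced.save(trace_model_name)         # TorchScript model loadable from the C++ node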
-To get training data, a tool is available which will read the poses from a vents file and an others file. The tool will change the robot's pose and take pictures automatically. For this, the tool should be activated when the simulation is spawned like so (the robot should be spawned in an undocked position so that the dock simulation does not interfere with manually setting the pose):
roslaunch isaac sim.launch pose:="10.5 -9 5 0 0 0 1"
To run the tool:
rosrun img_analysis get_train_data -path_dataset $PATH_DATASET -vent_poses $VENT_POSES -other_poses $OTHER_POSES [OPTIONS]
Arguments: path_dataset - Path to where to save the datasets, mandatory to define. vent_poses - .txt file containing the vent poses other_poses - .txt file containing the other non-vent poses robot_dist - Robot's distance to vent, standard is 1m train_pics_per_vent - Number of pictures taken per vent/other for train data test_pics_per_vent - Number of pictures taken per vent/other for test data
-There is a script, analyse_img.py, in the resources/ folder, which takes as argument the path of a picture taken with the sci_cam, processes it, and outputs the classification result. This script is useful to make sure that the C++ API for PyTorch is working properly.
Parameters: image - path of image to analyse
-Signal Semantic image_anomaly Volumetric GMM Change Detection
This directory provides the cargo_tool
-This tool is used to initiate pickup and drop cargo actions.
To run the tool:
rosrun cargo cargo_tool -$ACTION [OPTIONS] diff --git a/html/demos_native.html b/html/demos_native.html index 6c3d9f00..28fb34a1 100644 --- a/html/demos_native.html +++ b/html/demos_native.html @@ -91,35 +91,35 @@
To run demos using docker containers, please see Docker Install. There you'll find instructions on how to run the containers and available demos.
-roslaunch isaac sim.launch dds:=false robot:=sim_pub rviz:=true -
The inspection node enables the robot to inspect its surroundings; there are multiple modes to do so. If the robot is not already undocked, it will undock when the inspection command is executed. There are many customization options available to the inspection tool, so please check the help output with:
rosrun inspection inspection_tool -help -
-Used to take a close-up picture of an area and analyse it with the image anomaly detection node:
rosrun inspection inspection_tool -anomaly
The robot will inspect the target defined in astrobee/behaviors/inspection/resources/inspection_iss.txt by default, which is a vent at the entry of the JEM, bay 1. The robot will generate the survey, go to the inspection point, and take a picture with the sci camera. The incoming picture will be analysed by the image anomaly detector. In this case it will report back whether the analysed vent is free or obstructed. Note: if the image anomaly detector was not launched with the fsw, then it will only take the picture and skip the analysis.
Options include: target_distance (desired distance to target); target_size_x (target size x - width); target_size_y (target size y - height)
-Used to create a geometric model of an area (3D model with texture). Takes pictures at all the locations specified in the survey plan.
rosrun inspection inspection_tool -geometry
The robot will inspect the target defined in astrobee/behaviors/inspection/resources/geometry_iss.txt by default, which corresponds to bay 5 in the JEM. The robot will go to all locations and, after a stable stationkeep, take a sci camera image. When the image is confirmed to have been received, the robot moves on to the next station.
For instructions on how to analyse the obtained data recorded in a bagfile, go to Geometry mapper and streaming mapper.
-Used to create a volumetric model of a given signal.
rosrun inspection inspection_tool -volumetric
The robot will inspect the target defined in astrobee/behaviors/inspection/resources/volumetric_iss.txt by default, which corresponds to going around the JEM module. The robot stops at each station and then continues to the next.
To learn more about how to process this data, consult Volumetric Mapper. Data types that can be scoped through this method are signals such as wifi signal strength and RFID tags.
-Used to take pictures of a certain location that can be stitched into a panorama.
rosrun inspection inspection_tool -panorama
The robot will take pictures with the camera centered at the location defined in the survey file in astrobee/behaviors/inspection/resources/panorama_iss.txt. The inspection node generates the survey given the parameters provided or derived from the camera model, therefore the pose specified in the survey file is the panorama center and not each station coordinate. The robot will take pictures at each generated station similarly to the geometry mode.
Options include: h_fov (camera horizontal fov, default -1 uses camera matrix); max_angle (maximum angle (deg) to target); max_distance (maximum distance to target); min_distance (minimum distance to target); overlap (overlap between images); pan_max (maximum pan); pan_min (minimum pan).
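For example, to override the horizontal FOV and the allowed distance range (the values here are purely illustrative; run the tool with -help for the exact semantics of each option):
rosrun inspection inspection_tool -panorama -h_fov 60 -min_distance 0.5 -max_distance 2.0 -overlap 0.5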
-In simulation, it is possible to perform cargo transfer using Astrobee. To do so you will have to spawn the cargo at a certain location, and send the commands to pick up and drop the cargo.
To spawn a cargo:
roslaunch isaac_gazebo spawn_object.launch spawn:=cargo pose:="11.3 -5.6 5.7 -0.707 0 0 0.707" name:=CTB_05_1070 diff --git a/html/docker.html b/html/docker.html index e7954428..636a832b 100644 --- a/html/docker.html +++ b/html/docker.html @@ -90,40 +90,40 @@Docker Install
Install docker tools: https://docs.docker.com/engine/install/ubuntu/
Install nvidia-docker (optional, to use GPU): https://github.com/NVIDIA/nvidia-docker.
-To run the demos, you can use the remote pre-built images hosted on Github and skip this section. If you want to build the docker images locally instead of pulling from the remote repository, use:
./build.sh [OPTIONS]
Before running this script, please check the available options and defaults with:
./build.sh --help
The build script will automatically detect the current Ubuntu OS version and define the docker files variables UBUNTU_VERSION
, ROS_VERSION
, and PYTHON
accordingly. If a specific version is desired, the options --xenial, --bionic, and --focal are used for Ubuntu 16.04, 18.04, and 20.04 docker images, respectively.
If you don't want to run mast or don't have access to it (it is not a public repository), then use the option --no-mast.
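For example, to build Ubuntu 20.04 images without mast:
./build.sh --focal --no-mast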
-To run the docker containers:
./run.sh [OPTIONS]
Before running this script, please check the available options and defaults with:
./run.sh --help
Make sure the default paths are correct; if not, configure those options. Read through the different optional modules to understand if they fit your purpose.
It will automatically detect the current Ubuntu OS version. If a specific version is desired, the options --xenial, --bionic, and --focal are used for Ubuntu 16.04, 18.04, and 20.04 docker images, respectively.
Once the command is executed the host location of the modules launched will be printed. Open those paths on your favorite browser.
-To stop all of the containers, use:
scripts/docker/shutdown.sh -
There are currently 3 demos available to showcase some aspects of the ISAAC functionality.
Open http://127.0.0.1:8080
in a web browser to see what is happening. Use docker ps
to see the docker containers and use docker exec -it container_name /bin/bash
to get a shell in one.
Cancel with Ctrl+c and then run scripts/docker/shutdown.sh
to stop the demo.
./demos/trigger_anomaly.sh
This demo will trigger a CO2 anomaly; the levels of CO2 will start to increase. The mast detects the anomaly and sends Astrobee to inspect a vent. Astrobee will undock, calculate the optimal inspection pose to observe the target, and move towards that pose, replanning if any obstacle is found. When the robot has the vent of interest in sight, it will take a picture and run it through a trained CNN, identifying whether the vent is obstructed, free, or whether the result is inconclusive. After inspection Astrobee will dock autonomously.
-./demos/trigger_geometric_mapping.sh
This demo will trigger a geometric mapping inspection event. The geometric mapper collects pictures from several poses and creates a 3D mesh of the ISS. The robot will undock, follow a trajectory taking pictures at the specified waypoints, and dock again. For the geometric mapper, the trajectory followed is defined in astrobee/behaviors/inspection/resources/geometry_iss.txt. The geometric mapper will map a section of the JEM containing bay 5.
-./demos/trigger_wifi_mapping.sh
This demo will trigger a volumetric mapping inspection event. The volumetric mapper collects information from an onboard sensor of Astrobee and interpolates the data in a specified area. The robot will undock, follow a trajectory and dock again. For the wifi mapper, the trajectory followed is defined in astrobee/behaviors/inspection/resources/volumetric_iss.txt.
diff --git a/html/geometric_streaming_mapper.html b/html/geometric_streaming_mapper.html index 09d58d51..2665e4a0 100644 --- a/html/geometric_streaming_mapper.html +++ b/html/geometric_streaming_mapper.html @@ -90,19 +90,19 @@This document describes how to process and fuse depth and image data coming from an Astrobee robot. The two main tools are:
The following environmental variables should be set up (please adjust them for your particular configuration):
export ASTROBEE_WS=$HOME/astrobee export ASTROBEE_SOURCE_PATH=$ASTROBEE_WS/src export ISAAC_WS=$HOME/isaac -
This software makes use of three sensors that are mounted on the front face of the robot:
Based on this, and the transform between the nav and haz cameras, the haz cam depth readings are fused into a dense 3D model of the environment.
The sci cam is then used to texture this model. Alternatively, the nav cam pictures can be used for the texture as well.
An important note here is that the haz cam takes measurements about five times per second, the nav cam perhaps twice per second, while the science camera is triggered by an explicit command to take a picture, unless set in continuous picture-taking mode. This aspect of the sci cam will be elaborated later on.
-To be able to obtain high-fidelity results based on fusing the readings of these sensors, as described earlier, a lot of careful work needs to be done, which we discuss later in much detail. In particular, the intrinsics of all the cameras must be calibrated accurately, the transforms between their poses must be found, a sparse map based on the nav cam data must be computed and registered, and various conversions and interpolations (in time) between these sensors must be computed.
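As a concrete example of the interpolation in time just mentioned, the pose at a sci cam timestamp can be estimated by bracketing that (offset-corrected) timestamp between two nav cam poses and interpolating. A simplified Python sketch, not the geometry mapper's actual code (the offset argument stands in for the nav_cam_to_sci_cam timestamp offset discussed later):
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_pose(nav_times, nav_positions, nav_quats, sci_time, offset=0.0):
    # Pose at a sci cam timestamp, interpolated between the two bracketing nav cam poses.
    t = sci_time + offset
    assert nav_times[0] <= t <= nav_times[-1], "timestamp not bracketed by nav cam poses"
    i = int(np.clip(np.searchsorted(nav_times, t), 1, len(nav_times) - 1))
    t0, t1 = nav_times[i - 1], nav_times[i]
    alpha = (t - t0) / (t1 - t0)
    pos = (1 - alpha) * nav_positions[i - 1] + alpha * nav_positions[i]   # linear in position
    rot = Slerp([t0, t1], Rotation.from_quat([nav_quats[i - 1], nav_quats[i]]))(t)  # slerp in rotation
    return pos, rot.as_quat()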
These problems are greatly simplified with simulated data, when we assume perfectly known cameras, including their intrinsics, how they relate to each other, and their poses in space at any time. Hence, working with simulated data makes it easy to test core aspects of this sensor fusion functionality in a simplified setting.
The drawback is that having two such sources of input data complicates the presentation of these tools. Some of them will be relevant only for one of the two data types, and some tools that apply to both may have different invocations for each case. The reader is advised to pay close attention to this, and we will make it clear at every step which of the two paradigms is being referred to.
-The simulator supports the nav_cam
, sci_cam
, haz_cam
cameras, which are analogous to the ones on the real robot, and also the heat_cam
and acoustics_cam
cameras which exist only in simulation. All these have been tested with the geometry mapper and streaming mapper.
The sci_cam
and haz_cam
cameras are not enabled by default in the simulator. To enable them, edit the simulation configuration, in
$ASTROBEE_SOURCE_PATH/astrobee/config/simulation/simulation.config @@ -130,7 +130,7 @@
More information about the simulated nav_cam, haz_cam, and sci_cam is at:
$ASTROBEE_SOURCE_PATH/simulation/readme.md
The heat camera is described in:
$ISAAC_WS/src/astrobee/simulation/isaac_gazebo/readme.md
The acoustics camera and how to enable it is documented at:
$ISAAC_WS/src/astrobee/simulation/acoustics_cam/readme.md -
For an actual camera, rather than a simulated one, the files:
$ASTROBEE_SOURCE_PATH/astrobee/config/cameras.config $ASTROBEE_SOURCE_PATH/astrobee/config/geometry.config @@ -146,17 +146,17 @@
To visualize images published by your camera in
rviz
, appropriate entities must be added in iss.rviz
, etc. Uncompressed or compressed images are supported, but for the latter adjustments must be made, mirroring
sci_cam
. For example, the image topic should be: /hw/cam_some/compressed, except in
-iss.rviz
, where the suffix/compressed
is not needed, but instead one sets Transport Hint: compressed
.+
Compiling the software
It is assumed that by now the Astrobee and ISAAC software is compiled.
This module depends on two additional pieces of software, Voxblox and CGAL.
-+
Compiling VoxBlox
To compile Voxblox, clone
https://github.com/oleg-alexandrov/voxblox/ (branch isaac, commit 9098a0f). This fork differs from the main repository at https://github.com/ethz-asl/voxblox by the introduction of a small tool named batch_tsdf.cc that reads the clouds to fuse and the transforms from disk and writes the output mesh back to disk, instead of using ROS. It can also take into account a point's reliability when fusing the clouds, which is computed by the geometry mapper.
Compile it using the instructions at:
https://github.com/oleg-alexandrov/voxblox/blob/master/docs/pages/Installation.rstThis should end up creating the program:
$HOME/catkin_ws/devel/lib/voxblox_ros/batch_tsdf -+
Compile the CGAL tools using the commands:
mkdir -p $HOME/projects cd $HOME/projects @@ -173,15 +173,15 @@
The outcome will be that some programs will be installed in:
$HOME/projects/cgal_toolswhich will be later looked up by the geometry mapper.
-+
CGAL license
CGAL is released under the GPL. Care must be taken not to include it or link to it in any ISAAC code. Using the CGAL tools as standalone programs does not infringe upon the GPL.
-+
Functionality provided by CGAL
The tools just compiled are used for smoothing meshes, filling holes, remeshing, removing small connected components, and simplifying the mesh. The geometry mapper can work even without these, but they produce nicer results. The geometry mapper uses all of them except the experimental remeshing tool.
-+
Data acquisition
-+
With real data
Acquire a bag of data on the bot. The current approach is to use a recording profile. A step-by-step procedure is outlined below if a recording profile has not been set up.
First give the bot the ability to acquire intensity data with the depth camera (haz_cam). For that, connect to the MLP processor of the bot. Edit the file:
/opt/astrobee/config/cameras.config @@ -200,10 +200,10 @@/hw/depth_haz/extended/amplitude_int /hw/cam_sci/compressed
Scan a wall or a larger area with the bot facing the wall as much as possible, and carefully scan objects jutting out of the wall. Later one can also acquire images with various camera positions and orientations, which help build an accurate sparse map, but those should not be used when fusing the depth clouds or in texturing.
Copy the resulting bag off the robot.
-+
With simulated data
The astrobee simulator supports a handful of cameras, mentioned earlier in the text.
-+
Recording simulated data
Start the simulator, such as:
source $ASTROBEE_WS/devel/setup.bash source $ISAAC_WS/devel/setup.bash @@ -220,7 +220,7 @@
to the
rosbag record
command.The robot can be told to move around by either running a plan, or by sending it a move command, such as:
rosrun mobility teleop -move -pos "11.0 -5.0 5.0" -tolerance_pos 0.0001 -att "0 0 0 1"
which will make the robot go along the module axis without changing its orientation.
-+
Data pre-processing
This applies only to real data.
If the recorded data is split into many small bags, as often happens on the ISS, those bags should first be merged as documented in:
$ASTROBEE_SOURCE_PATH/localization/sparse_mapping/readme.md @@ -236,7 +236,7 @@
To accomplish this processing, once the sci cam data is integrated into the bag, one can do the following:
$ISAAC_WS/devel/lib/geometry_mapper/scale_bag --input_bag input.bag \ --output_bag output.bag --image_type grayscale --scale 0.25
Note that the processed sci cam images will now be on topic
-/hw/cam_sci2
.+
Camera calibration
Currently the calibrator solution is not that accurate. It is suggested to use instead camera_refiner (see further down) on a bag acquired without a calibration target.
Camera calibration is an advanced topic. Likely your robot's cameras have been calibrated by now, and then this step can be skipped.
@@ -391,7 +391,7 @@
As before, it is better not to use the option
--timestamp_offset_sampling
unless one is sure it is necessary.Note that this time we optimize the intrinsics of cam1 (sci_cam) and we do not use
--update_depth_to_image_transform
or optimize the intrinsics of cam2 (haz_cam) as this was already done earlier. We do not optimize the distortion of cam1 as that can result in incorrect values if there are not enough measurements at image periphery. The distortion is best optimized with the camera refiner (see below).Optionally, one can try to further refine the calibration if not good enough when used with data collected while the bot is flying around doing inspection. This is an advanced topic which is handled further down this document, in the section on camera refinement.
-+
Data extraction
Nav cam images can be extracted from a bag as follows:
$ASTROBEE_WS/devel/lib/localization_node/extract_image_bag \ mydata.bag -image_topic /mgt/img_sampler/nav_cam/image_record \ @@ -404,7 +404,7 @@
ca To extract the depth clouds, which may be useful for debugging purposes, do:
$ISAAC_WS/devel/lib/geometry_mapper/extract_pc_from_bag mydata.bag \ -topic /hw/depth_haz/points -output_directory pc_data \ -use_timestamp_as_image_name -+
Build and register a SURF sparse map with the nav cam images. (This is needed only with real data.) See the sparse mapping documentation in the Astrobee repository, with more details given in the map building page.
Don't forget to set:
export ASTROBEE_ROBOT=<robot name>
and the other environmental variables from that document before running map-building.
It is very important to keep in the map at least one nav cam image whose timestamp is a couple of seconds before the first sci cam image that we would like to later overlay on top of the mesh created with the help of this map, and one nav cam image a couple of seconds after the last sci cam image we would like to keep, to ensure we process all sci cam images with the geometry mapper.
-The geometry mapper fuses the depth cloud data and creates textures from the image cameras.
Any image camera is supported, as long as it is present in the robot configuration file and a topic for it is in the bag file (see more details further down). The geometry mapper can handle both color and grayscale images, and, for the sci cam, both full and reduced resolution.
@@ -605,7 +605,7 @@The geometry mapper can run with a previously created mesh if invoked with the option --external_mesh
.
The most time-consuming part of the geometry mapper is computing the initial poses, which is the earliest step, or step 0. To resume the geometry mapper at any step, use the option --start_step num
. For example, one may want to apply further smoothing to the mesh or more hole-filling, before resuming with the next steps.
For a given camera type to be textured it must have entries in cameras.config
and the robot config file (such as bumble.config
), which are analogous to existing nav_cam_to_sci_cam_timestamp_offset
, nav_cam_to_sci_cam_transform
, and sci_cam
intrinsics, with "sci" replaced by your camera name. The geometry mapper arguments --camera_types
, --camera_topics
, and --undistorted_crop_wins
must be populated accordingly, with some careful choice to be made for the last one. Images for the desired camera must be present in the bag file on the specified topic.
The geometry mapper works with any simulated cameras not having distortion. It was tested to work with simulated images for sci_cam
, haz_cam
, heat_cam
, and acoustics_cam
. It does not work with nav_cam
, which has distortion.
The flag:
--simulated_data @@ -628,13 +628,13 @@--angle_between_processed_cams 5.0 \ --verbose
It is important to check that the correct names for the camera image topics are passed to --camera_topics
.
Here we assume that the geometry mapper ran and created a dense 3D model of the region of interest. Then, the robot is run, whether in simulation or in the real world. It records images with a camera, which can be sci cam, nav cam, etc. (see the complete list below), that we call the "texture images".
The streaming mapper ROS node will then overlay each texture image received with this camera on top of the 3D model, and publish the obtained textured model to be visualized.
-To run the streaming mapper with real data for the given bot, do:
source $ASTROBEE_WS/devel/setup.bash source $ISAAC_WS/devel/setup.bash @@ -669,7 +669,7 @@
The recording should start before the input bag is played. The -b
option tells ROS to increase its recording buffer size, as sometimes the streaming mapper can publish giant meshes.
The robot pose that the streaming mapper needs assumes a very accurate calibration of the IMU sensor in addition to the nav, haz, and sci cam sensors, and very accurate knowledge of the pose of these sensors on the robot body. If that is not the case, it is suggested to use the nav cam pose via the nav_cam_pose_topic
field in streaming_mapper.config (set it to /loc/ml/features
), for which only accurate calibration of the nav, sci, and haz cameras among each other is assumed, while the ekf_pose_topic
must be set to an empty string.
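In streaming_mapper.config this corresponds to entries along the lines of (exact formatting may differ):
nav_cam_pose_topic = "/loc/ml/features"
ekf_pose_topic = ""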
The input texture can be in color or grayscale, at full or reduced resolution, and compressed or not.
-If no robot body or nav cam pose information is present, for example, if the EKF or localization node was not running when the image data was acquired, or this data was not recorded or was not reliable, the localization node can be started together with the streaming mapper, and this node will provide updated pose information.
Edit streaming_mapper.config
and set nav_cam_pose_topic
to /loc/ml/features
and let ekf_state_topic
be empty.
Above, the /loc/ml/features and /gnc/ekf topics, which may exist in the bag, are redirected to temporary topics, since the newly started localization node will create new camera pose information.
The --clock
option should not be missed.
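A representative invocation (the bag name and the temporary topic names are illustrative):
rosbag play --clock data.bag /loc/ml/features:=/tmp/loc/ml/features /gnc/ekf:=/tmp/gnc/ekf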
Then enable the localization node by running in a separate terminal:
rosservice call /loc/ml/enable true -
The streaming_mapper.config
file has the following fields:
For simulated data the usage is somewhat different. First, simulation.config needs to be edited as described earlier in the document to turn on the simulated sci cam, haz cam, or other desired camera.
When working with ISS data, more specifically the JPM module, do:
export ASTROBEE_WORLD=iss @@ -736,13 +736,13 @@
Hence, the parameters ekf_state_topic
, ekf_pose_topic
, and nav_cam_pose_topic
are ignored.
The streaming mapper will publish its results on topics mentioned earlier in the text.
Note that value of ASTROBEE_ROBOT
is not needed in this case. Any user-set value will be overwritten with the robot name sim
.
The calibration done with the calibration target can still leave some residual registration error between the cameras, which manifests itself as discrepancies in the geometry mapper and streaming mapper products, and between the nav cam and sci cam textures.
Once a dataset of the robot flying around and performing inspections is acquired, so in realistic conditions, rather than with a calibration target, it can be used to further refine the camera calibration file, including the intrinsics and extrinsics.
The calibration step above can be avoided altogether, and this robot's desired transforms to be refined can be initialized with values from a different robot or with the placeholder values already present in a given robot's config file.
To avoid issues with accuracy of the timestamps of the images, we assume that the robot is paused, or otherwise moves very slowly, during each sci cam shot. Then, to refine the camera calibration, the following approach should be taken.
-Select a set of nav cam images shortly before and after each sci cam image using the image_picker tool:
export ASTROBEE_RESOURCE_DIR=$ASTROBEE_SOURCE_PATH/astrobee/resources export ASTROBEE_CONFIG_DIR=$ASTROBEE_SOURCE_PATH/astrobee/config @@ -766,7 +766,7 @@
One could start by running this tool with the --max_dist_between_images option, wiping many redundant images while ensuring there is good overlap among them and keeping pairs of similar images, then running this tool one more time without this option to ensure the bracketing images for each sci cam image are added back.
The option --left_bracket_only will generate the left brackets only; this might be useful if you want to minimize images in the map-making process, but be sure to merge the remaining images at a later stage.
To save a log file with a specific name containing the generated pictures' paths, use:
--logfile <str> -+
Build a sparse map with these images. Use the same environment as above:
dir=nav_images images=$(ls $dir/*jpg) @@ -798,7 +798,7 @@--input_map merged.map --output_map ${dir}_surf_reg.map \ ${dir}/*jpg
Here, $dir points to nav_images as earlier in the document.
-Next, the refiner tool is run, as shown below. This will overwrite the camera calibration file, so it may be prudent to start by copying the existing calibration file to a new name, and set ASTROBEE_ROBOT to point to that.
export ASTROBEE_RESOURCE_DIR=$ASTROBEE_SOURCE_PATH/astrobee/resources export ASTROBEE_CONFIG_DIR=$ASTROBEE_SOURCE_PATH/astrobee/config @@ -848,10 +848,10 @@
A source of errors (apart from inaccurate intrinsics, extrinsics, or insufficiently good modeling of the cameras) can be the nav_cam_to_sci_cam_timestamp_offset, which can be non-zero if the HLP and MLP/LLP processors are not synchronized (the sci_cam pictures are acquired with the HLP and nav_cam with MLP/LLP). If this value is not known well, this tool can be run with zero or more iterations and various values of:
--nav_cam_to_sci_cam_offset_override_value <val>to see which value gives the smallest residuals.
If the
---out_texture_dir
option is specified, the tool will create textured meshes for each image and optimized camera at the end. Ideally those textured meshes will agree among each other.+
The algorithm
See camera_refiner.cc for a lengthy explanation of the algorithm.
-+
Camera refiner options
This program's options are:
--ros_bag (string, default = "") A ROS bag with recorded nav_cam, haz_cam intensity, @@ -1061,14 +1061,14 @@--verbose (bool, false unless specified) Print the residuals and save the images and match files. Stereo Pipeline's viewer can be used for visualizing these. -
+
The camera refiner supports using a radtan distortion model for nav_cam, that is, a model with radial and tangential distortion, just like for haz_cam and sci_cam, but the default nav_cam distortion model is fisheye. One can edit the robot config file and replace the fisheye model with a desired radial + tangential distortion model (4 or 5 coefficients are needed), then run the refiner.
Since it is not easy to find a good initial set of such coefficients, the refiner has the option of computing such a model which best fits the given fisheye model. For that, the refiner is started with the fisheye model, this model is used to set up the problem, including triangulating the 3D points after feature detection, then the fisheye model is replaced on-the-fly with desired 4 or 5 coefficients of the radtan model via the option --nav_cam_distortion_replacement, to which one can pass, for example, "0 0 0 0". These coefficients will then be optimized while keeping the rest of the variables fixed (nav cam focal length and optical center, intrinsics of other cameras, and all the extrinsics). The new best-fit distortion model will be written to disk at the end, replacing the fisheye model, and from then on the new model can be used for further calibration experiments just as with the fisheye model.
It may however be needed to rerun the refiner one more time, this time with the new distortion model read from disk, and still keep all intrinsics and extrinsics (including the sparse map and depth to image) fixed, except for the nav cam distortion, to fully tighten it.
Since it is expected that fitting such a model is harder at the periphery, where the distortion is stronger, the camera refiner has the option --nav_cam_num_exclude_boundary_pixels
which can be used to restrict the nav cam view to a central region of given dimensions when such optimization takes place (whether the new model type is fit on the fly or read from disk when already determined). If a satisfactory solution is found and it is desired to later use the geometry mapper with such a model, note its option --undistorted_crop_wins
, and one should keep in mind that the restricted region specified earlier may not exactly be the region to be used with the geometry mapper, since the former is specified in distorted pixels and this one in undistorted pixels.
All this logic was tested and was shown to work in a satisfactory way, but no thorough attempt was made at validating that a radtan distortion model, while having more degrees of freedom, would out-perform the fisheye model. That is rather unlikely, since given sufficiently many images with good overlap, the effect of the peripheral region where the fisheye lens distortion may not perform perfectly may be small.
-Given a camera image, its pose, and a mesh, a useful operation is to create a textured mesh with this camera. While the geometry mapper can create textured meshes as well, this tool does so from individual images rather than fusing them. It uses the logic from the streaming mapper instead of texrecon which is used by the geometry mapper.
A geometry mapper run directory has all the inputs this tool needs. It can be run as follows:
export ASTROBEE_SOURCE_PATH=$HOME/projects/astrobee/src diff --git a/html/gmm.html b/html/gmm.html index 0176ccfd..916a53a8 100644 --- a/html/gmm.html +++ b/html/gmm.html @@ -90,13 +90,13 @@GMM Change Detection
This implementation of a GMM-based anomaly detection algorithm was created by Jamie Santos for the purposes of a [Master thesis](). This algorithm is able to detect changes in environments such as the ISS using 3D depth point cloud data.
-pip3 install pulp pip3 install scikit-learn pip3 install pyntcloud pip3 install pandas pip3 install open3d apt-get install glpk-utils apt-get install ros-noetic-ros-numpy
-rosrun gmm gmm_change_detection.py
This directory provides two tools: inspection_tool and sci_cam_tool.
-This tool is used to initiate inspection actions. To run the tool:
rosrun inspection inspection_tool -$ACTION [OPTIONS]
General parameters
volumetric: This will perform a volumetric survey
An example command for this type of anomaly would be:
rosrun inspection inspection_tool -volumetric -volumetric_poses /resources/volumetric_iss.txt -
This tool is used to control the sci cam plugin in the Astrobee simulator, more precisely the way it acquires pictures. To use it, perform the following steps:
Start the simulator, for example as:
roslaunch astrobee sim.launch @@ -177,7 +177,7 @@
The pictures will be published on the topic
/hw/cam_sci/compressedThey will also show up in the sci cam window in RVIZ.
If requests to take a single picture come at a high rate, some of them will be dropped.
-+
Using export panorama tool
This tool was created to allow panorama surveys to be created and exported. This is useful for making panorama plans beforehand to ensure reproducibility.
To export the panorama file:
rosrun inspection export_panorama -panorama_poses $PANORAMA_POSES -panorama_out $OUTPUL_PLAN diff --git a/html/inspection.js b/html/inspection.js index a54df103..d6dba280 100644 --- a/html/inspection.js +++ b/html/inspection.js @@ -1,17 +1,17 @@ var inspection = [ [ "Panorama coverage planning", "pano_coverage.html", [ - [ "Relevant files", "pano_coverage.html#autotoc_md56", null ], - [ "Panorama design approach", "pano_coverage.html#autotoc_md57", null ], - [ "ISAAC panorama survey parameters", "pano_coverage.html#autotoc_md58", [ - [ "Using the inspection tool", "inspection.html#autotoc_md64", null ], - [ "Using sci_cam_tool", "inspection.html#autotoc_md65", null ], - [ "Using export panorama tool", "inspection.html#autotoc_md66", null ], - [ "5_mapper_and_hugin", "pano_coverage.html#autotoc_md59", null ], - [ "4_mapper", "pano_coverage.html#autotoc_md60", null ], - [ "6_nav_hugin", "pano_coverage.html#autotoc_md61", null ] + [ "Relevant files", "pano_coverage.html#autotoc_md57", null ], + [ "Panorama design approach", "pano_coverage.html#autotoc_md58", null ], + [ "ISAAC panorama survey parameters", "pano_coverage.html#autotoc_md59", [ + [ "Using the inspection tool", "inspection.html#autotoc_md65", null ], + [ "Using sci_cam_tool", "inspection.html#autotoc_md66", null ], + [ "Using export panorama tool", "inspection.html#autotoc_md67", null ], + [ "5_mapper_and_hugin", "pano_coverage.html#autotoc_md60", null ], + [ "4_mapper", "pano_coverage.html#autotoc_md61", null ], + [ "6_nav_hugin", "pano_coverage.html#autotoc_md62", null ] ] ], - [ "Validation", "pano_coverage.html#autotoc_md62", null ], - [ "Camera field of view estimation", "pano_coverage.html#autotoc_md63", null ] + [ "Validation", "pano_coverage.html#autotoc_md63", null ], + [ "Camera field of view estimation", "pano_coverage.html#autotoc_md64", null ] ] ] ]; \ No newline at end of file diff --git a/html/md_INSTALL.html b/html/md_INSTALL.html index 79bf0970..e927b30d 100644 --- a/html/md_INSTALL.html +++ b/html/md_INSTALL.html @@ -96,44 +96,51 @@Note: You will need 4 GBs of RAM to compile the software. If you don't have that much RAM available, please use swap space.
Note: Please ensure you install Ubuntu 16.04, 18.04 or 20.04. At this time we do not support any other operating system or Ubuntu versions.
Note: Please ensure you install the 64-bit version of Ubuntu. We do not support running ISAAC Software on 32-bit systems.
-The
+isaac
repo depends on someastrobee
packages, therefore,astrobee
needs to be installed beforehand.The
isaac
repo depends on someastrobee
packages, therefore,astrobee
needs to be installed beforehand. See the Astrobee Robot Software Installation Instructions for detailed setup instructions.Checkout the project source code
-At this point you need to decide where you'd like to put the ISAAC workspace and code (
ISAAC_WS
) on your machine (add this to your .bashrc for persistency):export ISAAC_WS=$HOME/isaac -First, clone the flight software repository:
git clone --recursive https://github.com/nasa/isaac.git \ ---branch develop $ISAAC_WS/src/ -Checkout the submodule:
git submodule update --init --recursive +At this point you need to decide where you'd like to put the ISAAC workspace and code (
ISAAC_WS
) on your machine, add this to your.bashrc
or.zshrc
for persistence:export ISAAC_WS=$HOME/isaac +First, clone the flight software repository:
git clone --recursive https://github.com/nasa/isaac.git --branch develop $ISAAC_WS/src +Checkout the submodule:
pushd $ISAAC_WS/src +git submodule update --init --recursive +popdDependencies
Next, install all required dependencies: Note:
-root
access is necessary to install the packages below Note: Before running this please ensure that your system is completely updated by running 'sudo apt-get update' and then 'sudo apt-get upgrade'pushd $ISAAC_WS/src cd scripts/setup ./install_desktop_packages.sh ./build_install_dependencies.sh sudo rosdep init rosdep update popd
+pushd $ISAAC_WS/src/scripts/setup ./install_desktop_packages.sh ./build_install_dependencies.sh sudo rosdep init rosdep update popd
Configuring the build
-By default, the catkin uses the following paths:
$ISAAC_WS/devel
$ISAAC_WS/install
Source your astrobee build environment, for example as:
source $ASTROBEE_WS/devel/setup.bash -
The configure script prepares your build directory for compiling the code. Note that configure.sh
is simply a wrapper around CMake that provides an easy way of turning on and off options. To see which options are supported, simply run configure.sh -h
.
pushd $ASTROBEE_WS +
The configure script prepares your build directory for compiling the code. Note that configure.sh
is simply a wrapper around CMake that provides an easy way of turning on and off options. To see which options are supported, simply run configure.sh -h
.
pushd $ISAAC_WS ./src/scripts/configure.sh -l source ~/.bashrc popd -
The configure script modifies your .bashrc
to source setup.bash
for the current ROS distribution and to set CMAKE_PREFIX_PATH. It is suggested to examine it and see if all changes were made correctly.
If you want to explicitly specify the workspace and/or install directories, use instead:
./scripts/configure.sh -l -p $INSTALL_PATH -w $WORKSPACE_PATH -
Note: If a workspace is specified but not an explicit install distectory, install location will be $WORKSPACE_PATH/install.
-To build, run catkin build
in the $WORKSPACE_PATH
. Note that depending on your host machine, this might take in the order of tens of minutes to complete the first time round. Future builds will be faster, as only changes to the code are rebuilt, and not the entire code base.
pushd $ASTROBEE_WS +
If you run a Zsh session, then
pushd $ISAAC_WS +./src/scripts/configure.sh -l +source ~/.zshrc +popd +
By default, the catkin uses the following paths:
$ISAAC_WS/devel
$ISAAC_WS/install
If you want to explicitly specify the workspace and/or install directories, set $WORKSPACE_PATH
and $INSTALL_PATH
to the desired paths and use the -p
ad -w
flags as shown: $ISAAC_WS/src/scripts/configure.sh -l -p $INSTALL_PATH -w $WORKSPACE_PATH +
Note: If a workspace is specified but not an explicit install directory, install location will be $WORKSPACE_PATH/install.
+The configure script modifies your .bashrc
/.zshrc
to source setup.bash
/setup.zsh
for the current ROS distribution and to set CMAKE_PREFIX_PATH. It is suggested to examine it and see if all changes were made correctly.
To build, run catkin build
in the $WORKSPACE_PATH
. Note that depending on your host machine, this might take in the order of tens of minutes to complete the first time round. Future builds will be faster, as only changes to the code are rebuilt, and not the entire code base.
pushd $ISAAC_WS catkin build popd
If you are working in simulation only, then you're all done! The next steps are only for running ISAAC onboard Astrobee.
To cross-compile ISAAC, one must first cross compile the astobee code using the NASA_INSTALL instructions. Note that ASTROBEE_WS
must be defined!!!
To cross-compile ISAAC, one must first cross-compile the astrobee code using the NASA_INSTALL instructions. Note that ASTROBEE_WS
and ARMHF_CHROOT_DIR
must be defined!
Cross compiling for the robot follows the same process, except the configure script takes a -a
flag instead of -l
.
pushd $ISAAC_WS ./src/scripts/configure.sh -a popd -
Or with explicit build and install paths:
./scripts/configure.sh -a -p $INSTALL_PATH -w $WORKSPACE_PATH +
Or with explicit build and install paths:
pushd $ISAAC_WS +./src/scripts/configure.sh -a -p $INSTALL_PATH -w $WORKSPACE_PATH +popd
Warning: $INSTALL_PATH
and $WORKSPACE_PATH
used for cross compiling HAVE to be different than the paths for native build! See above for the default values for these.
Once the code has been built, it also installs the code to a singular location. CMake remembers what $INSTALL_PATH
you specified, and will copy all products into this directory.
Here, p4d is the name of the robot, which may be different in your case.
To build a debian you must first confirm that cross-compiling is functional. Once it is:
./src/scripts/build/build_debian.sh +To build a debian you must first confirm that cross-compiling is functional. Once it is:
pushd $ISAAC_WS +./src/scripts/build/build_debian.sh +popd ++Switching build profiles
+To alternate between native and armhf (cross-compile) profiles:
catkin profile set native +catkin profile set armhf
A panorama coverage plan is a sequence of image center pan/tilt values. The objective of panorama coverage planning is to generate a plan that completely covers the specified range of pan/tilt values with sufficient image-to-image overlap to permit downstream processing (e.g., Hugin panorama stitching), and is sufficiently robust to attitude error. Within those constraints, we want to optimize the plan for minimum image count, as a proxy for minimum run time.
-test_pano
tool and produces plots for debugging.Panorama planning starts from the concept of a rectangular grid of image centers, evenly spaced so as to completely cover the specified (rectangular) imaging area with the desired overlap and attitude tolerance.
The collection order of images in the grid follows a column-major raster pattern: alternating columns are collected top-to-bottom and bottom-to-top. Column-major is preferred because it makes it easier for crew to stay behind the robot during panorama collection, if they care to do so. Following an alternating raster pattern minimizes large attitude changes that are challenging for Astrobee localization.
@@ -114,10 +114,10 @@The primary effect of the warping is to make the effective image coverage wider near the poles. We take advantage of this effect by reducing the number of images in grid rows near the poles. A downside of reducing the image count is that the images no longer form a grid, so the column-major raster sequencing is only approximate (Fig. 2).
A secondary effect of the warping is that it complicates determining how to position the warped rectangles of individual image coverage so that together they cover the boundaries of the rectangular desired imaging area. As a result, although the panorama planner's simple heuristic image spacing algorithm tries to meet the coverage and overlap requirements, it can not guarantee they are satisfied in general. Instead, you are encouraged to use the test_pano
tool and plot_pano.py
script together to check correctness, and if there is a problem, inflate the plan_attitude_tolerance_degrees
parameter (used by the test_pano
tool at planning time) while leaving unchanged the test_attitude_tolerance_degrees
parameter (used by the plot_pano.py
tool at testing time), until the problem is corrected.
There are different styles of panorama that could be useful for different applications.
-The parameters in this test case are recommended as a potential "workhorse" panorama type for doing complete module surveys. The design criteria were:
5_mapper_and_hugin
sequence This test case examines the scenario of relaxing the Hugin auto-stitch requirement #2 above. In that case, requirement #5 becomes the driving requirement. Because the panorama motion is vertical and HazCam images are acquired continuously at ~5 Hz, the HazCam vertical spacing is not a driving constraint, even though the HazCam has a smaller FOV than the SciCam. As a result, we specify the VFOV from the SciCam, the HFOV from the the HazCam, and as a bit of a hack, we pad the tilt radius slightly to make doubly sure the HazCam gets complete coverage near the poles, despite its smaller VFOV.
The resulting panorama plan has far fewer images than 5_mapper_and_hugin
, 30 vs. 56, which is attractive. The downsides are that it may not be compatible with Hugin auto-stitch (although it may be feasible to pass pan/tilt parameters from NavCam bundle adjustment to Hugin instead), and probably more importantly, it would be less robust to excessive robot pointing error. If time permits, it might be useful to try to capture a panorama in this more aggressive mode to evaluate whether the data is sufficient for downstream analysis.
This test case examines the scenario of relaxing both requirements #2 and #5 above, so that the NavCam overlap requirement #4 becomes the driving requirement. HazCam and SciCam coverage would be incomplete, so the geometry mapper could not build a full 3D mesh, but the resulting NavCam imagery could be used to build a low-resolution NavCam panorama with Hugin auto-stitch.
The resulting panorama plan has only 15 images. This type of panorama could occasionally be suitable for a fast low-resolution survey.
-The following shell commands can be used to validate the panorama planner on the test cases:
As of this writing, all test cases in pano_test_cases.csv
pass with the pano_orientations2()
planner.
During the validation process, plot_pano.py
also writes several plots for each test case that can be used to visualize the resulting panorama plan.
Modeling the field of view of a camera is complicated. The Astrobee cameras of interest for panorama planning each have a rectangular sensor and a lens with radial distortion. As a result, when the shape of the camera FOV is displayed in a standard equirectangular projection, even at tilt = 0, the FOV shape is not actually a rectangle, but instead has a curved shape with "spikes at the corners". For the purposes of panorama planning, the radial distortion effect is very significant for the NavCam, somewhat significant for the HazCam, and almost negligible for the SciCam.
The true camera FOV shape is both complicated to calculate and difficult to use for panorama planning purposes. As a result, our tools model the FOV as a simplified rectangle. In particular, the rectangle dimensions we use, as output by field_of_view_calculator.py
, are an estimate of the inscribed rectangle, i.e., the largest (axis-aligned) rectangle that fits completely within the true FOV shape. This rectangular approximate FOV is currently used both during panorama planning and validation. This is a conservative approach in that it will underestimate the true coverage and overlap in the panorama.
The isaac
folder is the primary entry point into flight software. For example, if you run roslaunch isaac <launch_file>
you are instructing ROS to examine the 'launch' directory in this folder for a XML file called <launch_file>
, which describes how to start a specific experiment.
resources
- A directory containing all non-LUA resources used by nodes in the system. Additional files might be needed depending on the context.scripts
- A simple bash script to print out the environment variables, which can be used to check the context at any point in the launch sequence.ISAAC_RESOURCE_DIR
: An absolute path to all non-LUA system resources that are used by the nodes in the system. For example, this includes sparse maps, zone files, clutter maps, etc.ISAAC_CONFIG_DIR
: An absolute path to all LUA config files that store input parameters for the nodes in the system.When launched, nodes must know the context in which they are being launched in order to operate correctly. By context, we mean (a) the robot class being run, (b) the world in which the robot is being run, and (c) paths to both a LUA config and a resource directory. You have flexibility in how this is specified, but note that we enforce the following strict precedence:
/etc > environment variable > roslaunch arguments > default roslaunch values
For example, consider this launch process run on your local desktop (in this case there will be no /etc/robotname
file set by default)
export ASTROBEE_ROBOT=p4d @@ -121,14 +121,14 @@-> [astrobee] -> ... [*]
At the package level, ilp nodes inherit from ff_nodelet, and are launched using a pattern defined by ff_nodelet.launch. This pattern respects our context determination hierarchy. glp nodes do not inherit from ff_nodelet, and are meant for general robot functionality.
-Assuming no environment variables are set or /etc
files are created, the default contexts defined by the launch files are the following:
isaac_astrobee.launch
: robot = {argument}, world = iss, drivers = truesim.launch
: robot = sim, world = granite, drivers = falseIt is possible to launch the GLP, ILP, MLP, LLP and simulator on remote devices. The corresponding arguments are glp:=<ip>
, ilp:=<ip>
, mlp:=<ip>
, llp:=<ip>
and sim:=<ip>
. There are two special values for <ip>
:
The {llp,mlp,sim} arguments with {IP,local,disabled} options is very powerful and supports any configuration of robot with simulated nodes or hardware in the loop.
-It is possible to specify on the command line the set of nodes to be launched using the nodes:=<comma_separated_list_of_nodes>
argument.
In this case, only the provided nodes will be launched on their destination processors (llp or mlp). In addition, it is possible to avoid roslaunch to perform any connection to a particular processor with the declaration {llp,mlp}:=disabled
. This is particularly useful if you need to test some nodes on one processor and do not have access to the other processor.
For example, to test only the picoflexx cameras on the MLP, not attempting connection to the LLP (in case it is absent from the test rig):
roslaunch isaac isaac_astrobee.launch llp:=disabled nodes:=pico_driver,framestore -
Start a local iss simulation with one p4d robot on namespace '/'
diff --git a/html/shared.html b/html/shared.html index bbba1aa2..9c5317fc 100644 --- a/html/shared.html +++ b/html/shared.html @@ -91,7 +91,7 @@Simulation contains the packages where the dense maps are built
This page describes the isaac gazebo plugins.
-This plugin simulates a heat-detecting camera, with the color in the images it produces suggestive of the temperature on the surface of the 3D environment seen in the camera.
The file
diff --git a/html/subsystems.js b/html/subsystems.js index 3e8244ed..9cf9526c 100644 --- a/html/subsystems.js +++ b/html/subsystems.js @@ -4,15 +4,15 @@ var subsystems = [ "Dense Map", "idm.html", "idm" ], [ "Anomaly Detector", "ano.html", "ano" ], [ "Analyst Notebook", "analyst.html", [ - [ "Starting the Analyst Notebook", "analyst.html#autotoc_md37", null ], - [ "Intro Tutorials", "analyst.html#autotoc_md38", [ - [ "1) Import Bagfile data to database (optional if using remote database)", "analyst.html#autotoc_md39", null ], - [ "2) Read data from the Database", "analyst.html#autotoc_md40", null ], - [ "3) Export results to ISAAC user interface", "analyst.html#autotoc_md41", null ] + [ "Starting the Analyst Notebook", "analyst.html#autotoc_md38", null ], + [ "Intro Tutorials", "analyst.html#autotoc_md39", [ + [ "1) Import Bagfile data to database (optional if using remote database)", "analyst.html#autotoc_md40", null ], + [ "2) Read data from the Database", "analyst.html#autotoc_md41", null ], + [ "3) Export results to ISAAC user interface", "analyst.html#autotoc_md42", null ] ] ], - [ "Case study Tutorials", "analyst.html#autotoc_md42", [ - [ "Collecting simulation data to train a CNN + validate with ISS data", "analyst.html#autotoc_md43", null ], - [ "Building a volumetric map of WiFi signal intensity + analyse model + visualize data", "analyst.html#autotoc_md44", null ] + [ "Case study Tutorials", "analyst.html#autotoc_md43", [ + [ "Collecting simulation data to train a CNN + validate with ISS data", "analyst.html#autotoc_md44", null ], + [ "Building a volumetric map of WiFi signal intensity + analyse model + visualize data", "analyst.html#autotoc_md45", null ] ] ] ] ], [ "Astrobee", "astrobee.html", "astrobee" ], diff --git a/html/volumetric_mapper.html b/html/volumetric_mapper.html index 012ff520..23caa1ae 100644 --- a/html/volumetric_mapper.html +++ b/html/volumetric_mapper.html @@ -90,16 +90,16 @@The wifi mapper subscribes to all the wifi signal strength messages and prcesses them for 3D visualization. Customization os the produced maps is defined in isaac/config/dense_map/wifi_mapper.config
-The trace map is drawn at a certain resolution, within this resolution, every signal received is averaged. The result is depicted using MarkerArray's where the cube size is the map resolution.
Parameters: plot_trace - enable or disable the trace map calculation resolution - resolution of the trace map where whithin it, the measurements will be averaged
-The Wifi Mapper uses Gaussian Process Regression to mapp the ISS regarding the wifi signal strength. It makes use of the libgp library. When a new measure is obtained, the value is recorded in the GP object. When the timer responsible for calculating the wifi interpolation is enables, all the recorded data is processed.
Parameters: plot_map - enable or disable the 3D map interpolation calculation
diff --git a/html/wifi_driver.html b/html/wifi_driver.html index 564f6534..a28be26a 100644 --- a/html/wifi_driver.html +++ b/html/wifi_driver.html @@ -96,16 +96,16 @@For the 'all' option, one needs to give the rights to the node executable, as:
setcap cap_net_admin+ep /path/to/wifi/node
(as a default, the node executable is in the build folder in devel/lib/wifi/wifi_tool)
All the parameters are in astrobee/config/hw/wifi.config General Parameters:
interface_name: to find out the interface name, run ifconfig on the command line. -
Publishes information to the topic hw/wifi/station. It only includes the signal strength of the wifi network to which the robot is currently connected. Scan Station Parameters:
time_scan_station: update rate of the station scan, can be set up to 50ms. -
Publishes information to the topic hw/wifi/all. Scan All Parameters:
time_scan_station: update rate of the station scan, can be set up to 50ms. time_scan_all: time in between scans, note that even if the time is set to a low limit, information acquisition rate is limited. max_networks: maximum number of networks that will be acquired during 'all' scan.
All data is published as messages on ROS topics using the prefix hw/wifi.
-This package is needed in the hardware/wifi node, so that we can scan the wifi networks. If you are missing this package, and have sudo rights, do:
sudo apt-get install libmnl-dev
Otherwise, it can be installed from source as follows:
mkdir $HOME/projects