Automatic update for c2db139.
marinagmoreira committed Nov 8, 2023
1 parent 56f8d94 commit 8e2dad0
Showing 24 changed files with 327 additions and 312 deletions.
10 changes: 5 additions & 5 deletions html/acoustics_camera.html
@@ -90,17 +90,17 @@
<div class="title">Acoustics camera </div> </div>
</div><!--header-->
<div class="contents">
<div class="textblock"><h3><a class="anchor" id="autotoc_md70"></a>
<div class="textblock"><h3><a class="anchor" id="autotoc_md71"></a>
Overview</h3>
<p>This camera simulates a microphone array, or, in other words, a directional microphone. Its readings are assembled into a spherical pattern, consisting of one floating-point measurement for each direction emerging from the microphone center. It is assumed that the microphone array is mounted on the robot and it takes readings as the robot moves around.</p>
<p>For visualization purposes, the microphone measurements are converted to an acoustic "image". Hence, a virtual camera is created centered at the microphone and with a certain pose that is ideally facing the direction where all or most of the interesting sounds are coming from. The reading at a pixel of that camera is the value of the microphone measurement in the direction of the ray going from the microphone (and camera) center through that pixel.</p>
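<p>In outline, and only as an illustration (the camera intrinsics and the measurement function below are assumptions, not the node's actual code), the image formation can be sketched in Python as: </p><pre class="fragment"># Minimal sketch of the acoustic-image formation described above; the
# intrinsics and the measurement function are illustrative assumptions.
import numpy as np

def acoustic_image(measure, width, height, fx, fy, cx, cy, cam_to_world):
    # measure: maps a unit direction in world coordinates to a scalar reading
    # cam_to_world: 3x3 rotation matrix giving the camera orientation
    img = np.zeros((height, width))
    for v in range(height):
        for u in range(width):
            # Ray from the camera (and microphone) center through pixel (u, v)
            ray_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
            ray_world = cam_to_world.dot(ray_cam / np.linalg.norm(ray_cam))
            img[v, u] = measure(ray_world)
    return img
</pre>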
<h3><a class="anchor" id="autotoc_md71"></a>
<h3><a class="anchor" id="autotoc_md72"></a>
Installation</h3>
<p>The acoustics camera depends on the pyroomacoustics package. This package can be installed together with its dependencies in a Python 2.7 environment using the command: </p><pre class="fragment">pip install numpy==1.15.4 scipy==0.18 pillow==6 PyWavelets==0.4.0 \
networkx==1.8 matplotlib==2.0.0 scikit-image==0.14 \
pyroomacoustics==0.3.1
</pre><p>The package normally installs itself in: </p><pre class="fragment">$HOME/.local/lib/python2.7/site-packages/pyroomacoustics
</pre><h3><a class="anchor" id="autotoc_md72"></a>
</pre><h3><a class="anchor" id="autotoc_md73"></a>
Running the acoustics camera</h3>
<p>The acoustics camera ROS node can be run as part of the simulator. For that, first set up the environment along the lines of: </p><pre class="fragment">export ASTROBEE_SOURCE_PATH=$HOME/astrobee/src
export ASTROBEE_BUILD_PATH=$HOME/astrobee
@@ -115,15 +115,15 @@ <h3><a class="anchor" id="autotoc_md71"></a>
roslaunch acoustics_cam acoustics_cam.launch output:=screen
</pre><p>The acoustics camera can be run without ROS as: </p><pre class="fragment">$ISAAC_WS/src/astrobee/simulation/acoustics_cam/nodes/acoustics_cam debug_mode
</pre><p>In that case it assumes that the robot pose is the value set in the field "debug_robot_pose" in acoustics_cam.json (see below). In this mode it will only create a plot of the acoustics cam image. The sources of sounds will be represented as crosses in this plot, and the camera (microphone) position will be shown as a star.</p>
<h3><a class="anchor" id="autotoc_md73"></a>
<h3><a class="anchor" id="autotoc_md74"></a>
ROS communication</h3>
<p>The acoustics camera subscribes to </p><pre class="fragment">/loc/truth/pose
</pre><p>to get the robot pose. It publishes its image, camera pose, and camera intrinsics on topics: </p><pre class="fragment">/hw/cam_acoustics
/sim/acoustics_cam/pose
/sim/acoustics_cam/info
</pre><p>By default, the camera takes pictures as often as it can (see the configuration below); in practice this is infrequent, as the simulation is slow. It listens, however, to the topic: </p><pre class="fragment">/comm/dds/command
</pre><p>for guest science commands that may tell it to take a single picture at a specific time, or to take pictures continuously. Such a command must use the app name "gov.nasa.arc.irg.astrobee.acoustics_cam_image" (which is the "s" field in the first command argument) for it to be processed.</p>
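<p>Purely as an illustration, such a command could be published from Python roughly as follows; the message layout is assumed from the Astrobee ff_msgs definitions and should be verified against the real message files before use: </p><pre class="fragment"># Hedged sketch: ask acoustics_cam to take a single picture via a guest
# science command. Field and constant names follow our reading of ff_msgs
# and may need adjusting against the actual message definitions.
import rospy
from ff_msgs.msg import CommandStamped, CommandArg

rospy.init_node("acoustics_cam_commander")
pub = rospy.Publisher("/comm/dds/command", CommandStamped, queue_size=1)

arg = CommandArg()
arg.data_type = CommandArg.DATA_TYPE_STRING  # assumed constant name
arg.s = "gov.nasa.arc.irg.astrobee.acoustics_cam_image"

cmd = CommandStamped()
cmd.cmd_name = "customGuestScience"          # assumed command name
cmd.args = [arg]

rospy.sleep(1.0)  # give the publisher time to connect
pub.publish(cmd)
</pre>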
<h3><a class="anchor" id="autotoc_md74"></a>
<h3><a class="anchor" id="autotoc_md75"></a>
Configuration</h3>
<p>The behavior of this camera is described in: </p><pre class="fragment"> $ISAAC_WS/src/astrobee/simulation/acoustics_cam/acoustics_cam.json
</pre><p>It has the following entries:</p>
16 changes: 8 additions & 8 deletions html/analyst.html
@@ -90,34 +90,34 @@
<div class="title">Analyst Notebook </div> </div>
</div><!--header-->
<div class="contents">
<div class="textblock"><h1><a class="anchor" id="autotoc_md37"></a>
<div class="textblock"><h1><a class="anchor" id="autotoc_md38"></a>
Starting the Analyst Notebook</h1>
<p><b>The Jupyter notebooks can access data in the <code>$HOME/data</code> and <code>$HOME/data/bags</code> directories; therefore, make sure all the relevant bag files are there.</b></p>
<p>For the Analyst Notebook to be functional, it needs to start side-by-side with the database and the IUI (ISAAC user interface). The recommended method is to use the remote docker images: </p><pre class="fragment">$ISAAC_SRC/scripts/docker/run.sh --analyst --no-sim --remote
</pre><p>The ISAAC UI is hosted at <a href="http://localhost:8080">http://localhost:8080</a>, the ArangoDB database at <a href="http://localhost:8529">http://localhost:8529</a>, and the Analyst Notebook at <a href="http://localhost:8888/lab?token=isaac">http://localhost:8888/lab?token=isaac</a>.</p>
<h1><a class="anchor" id="autotoc_md38"></a>
<h1><a class="anchor" id="autotoc_md39"></a>
Intro Tutorials</h1>
<p>Please follow all the tutorials to familiarize yourself with the available functions and to detect whether something is not working properly.</p>
<h2><a class="anchor" id="autotoc_md39"></a>
<h2><a class="anchor" id="autotoc_md40"></a>
1) Import Bagfile data to database (optional if using remote database)</h2>
<p>Open the tutorial <a href="http://localhost:8888/lab/tree/1_import_bagfiles.ipynb">here</a>.</p>
<p>This tutorial covers how to upload bag files to a local database. Be aware that uploading large bag files might take a long time. If possible select only the time intervals/topic names that are required for analysis to speed up the process.</p>
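<p>As a rough sketch of what the import step amounts to (the database credentials, collection name, and bag path below are placeholders, not the notebook's actual configuration): </p><pre class="fragment"># Hedged sketch of importing bag messages into ArangoDB; names and
# credentials below are illustrative assumptions.
import rosbag
from arango import ArangoClient

client = ArangoClient(hosts="http://localhost:8529")
db = client.db("isaac", username="root", password="isaac")  # assumed credentials
col = db.collection("pose_messages")                        # hypothetical collection

with rosbag.Bag("/home/user/data/bags/survey.bag") as bag:  # example path
    for topic, msg, t in bag.read_messages(topics=["/loc/truth/pose"]):
        p = msg.pose.position
        col.insert({"topic": topic, "stamp": t.to_sec(),
                    "x": p.x, "y": p.y, "z": p.z})
</pre>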
<h2><a class="anchor" id="autotoc_md40"></a>
<h2><a class="anchor" id="autotoc_md41"></a>
2) Read data from the Database</h2>
<p>Open the tutorial <a href="http://localhost:8888/lab/tree/2_read_database.ipynb">here</a>.</p>
<p>This tutorial covers how to display data uploaded to the database. It contains some examples of the most common data type / topics. You can filter the data that gets collected from the database using queries.</p>
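<p>A query-based read might look roughly like the following (again with placeholder database and collection names): </p><pre class="fragment"># Hedged sketch: read pose documents back out of the database with AQL.
from arango import ArangoClient

db = ArangoClient(hosts="http://localhost:8529").db(
    "isaac", username="root", password="isaac")   # assumed credentials
cursor = db.aql.execute(
    "FOR doc IN pose_messages SORT doc.stamp LIMIT 100 RETURN doc")
for doc in cursor:
    print(doc["stamp"], doc["x"], doc["y"], doc["z"])
</pre>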
<h2><a class="anchor" id="autotoc_md41"></a>
<h2><a class="anchor" id="autotoc_md42"></a>
3) Export results to ISAAC user interface</h2>
<p>Open the tutorial <a href="http://localhost:8888/lab/tree/3_export_result_to_iui.ipynb">here</a>.</p>
<p>This tutorial covers the available methods to visualize data in the ISAAC user interface (IUI).</p>
<p>Open the IUI 3D viewer <a href="http://localhost:8080">here</a>.</p>
<h1><a class="anchor" id="autotoc_md42"></a>
<h1><a class="anchor" id="autotoc_md43"></a>
Case study Tutorials</h1>
<h2><a class="anchor" id="autotoc_md43"></a>
<h2><a class="anchor" id="autotoc_md44"></a>
Collecting simulation data to train a CNN + validate with ISS data</h2>
<p>Open the tutorial <a href="http://localhost:8888/lab/tree/build_CNN_with_pytorch.ipynb">here</a>.</p>
<p>Here, we use simulation tools to automatically build a train and a test dataset. The simulation dataset builder takes arguments such as the target position, model positions, and Gaussian noise. Using the simulated data, we use PyTorch to train the classifier of a previously trained CNN. We optimize the CNN on the train dataset and use the test dataset to decide which iteration of the optimization to keep. With the trained CNN we can run newly collected data through it, namely real captured image data.</p>
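<p>A minimal sketch of such a fine-tuning loop, with an off-the-shelf ResNet standing in for the pre-trained CNN and illustrative paths and hyperparameters: </p><pre class="fragment"># Hedged sketch of fine-tuning a CNN classifier on the simulated dataset;
# paths, transforms, and hyperparameters are illustrative only.
import torch
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train = datasets.ImageFolder("/home/user/data/sim_dataset/train", tf)  # example path
loader = DataLoader(train, batch_size=16, shuffle=True)

model = models.resnet18(pretrained=True)       # stand-in for the pre-trained CNN
model.fc = nn.Linear(model.fc.in_features, 3)  # e.g. free / obstacle / unknown
opt = optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                         # small number for illustration
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
torch.save(model.state_dict(), "model_cnn.pt")
</pre>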
<h2><a class="anchor" id="autotoc_md44"></a>
<h2><a class="anchor" id="autotoc_md45"></a>
Building a volumetric map of WiFi signal intensity + analyse model + visualize data</h2>
<p>Open the tutorial </p>
</div></div><!-- contents -->
14 changes: 7 additions & 7 deletions html/ano.html
@@ -90,31 +90,31 @@
<div class="title">Anomaly Detector </div> </div>
</div><!--header-->
<div class="contents">
<div class="textblock"><h1><a class="anchor" id="autotoc_md48"></a>
<div class="textblock"><h1><a class="anchor" id="autotoc_md49"></a>
Image Anomaly Detector</h1>
<h2><a class="anchor" id="autotoc_md49"></a>
<h2><a class="anchor" id="autotoc_md50"></a>
Overview</h2>
<p>The image anomaly detector contains a set of tools to analyse incoming images using convolutional neural networks (CNNs). To build, train, and test the CNNs we use PyTorch.</p>
<h2><a class="anchor" id="autotoc_md50"></a>
<h2><a class="anchor" id="autotoc_md51"></a>
TorchLib</h2>
<p>This package is needed by the anomaly/img_analysis node so that we can analyse the images, looking for anomalies. The first step is to download the LibTorch ZIP archive. The link might change, so it is best to go to <a href="https://pytorch.org/">https://pytorch.org/</a> and select Linux-&gt;LibTorch-&gt;C++/Java.</p>
<p>Important: use the link labeled '(cxx11 ABI)'. If you select the '(Pre-cxx11 ABI)' one, it will break ROS: </p><pre class="fragment">wget https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-1.5.0%2Bcpu.zip
</pre><p>It is advised to unzip the package into a general directory such as '/usr/include': </p><pre class="fragment">unzip libtorch-shared-with-deps-latest.zip
</pre><p>To link the path, add this to your '$HOME/.bashrc' file: </p><pre class="fragment">export CMAKE_PREFIX_PATH=$CMAKE_PREFIX_PATH:/path/to/libtorch/share/cmake/Torch
</pre><h2><a class="anchor" id="autotoc_md51"></a>
</pre><h2><a class="anchor" id="autotoc_md52"></a>
Define and train the CNN</h2>
<p>The Python code containing the CNN definition and training is in resources/vent_cnn.py.</p>
<p>Parameters:</p>
<ul>
<li>data_dir - path to the dataset. The dataset should have the correct structure for data import; it should be the same as 'path_dataset' in the Get training data arguments.</li>
<li>classes - the image classes; each class should be a folder name in the test and train folders. The default classes are ['free', 'obstacle', 'unknown']: 'free' means that a free vent was detected, 'obstacle' means that the vent contains an obstacle, and 'unknown' means that the vent was not detected.</li>
<li>num_epochs - number of epochs to train; default 30.</li>
<li>model_name - saved model name; default "model_cnn.pt".</li>
<li>trace_model_name - saved traced model name; default "traced_model_cnn.pt".</li>
</ul>
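<p>The traced model consumed by the C++ node can be produced with TorchScript tracing; a hedged sketch follows, with the network construction and input resolution assumed: </p><pre class="fragment"># Hedged sketch: trace a trained model so the C++ (LibTorch) node can load
# it. The network construction and input resolution are assumptions.
import torch
from torch import nn
from torchvision import models

model = models.resnet18()                      # stand-in for the vent CNN
model.fc = nn.Linear(model.fc.in_features, 3)  # free / obstacle / unknown
model.load_state_dict(torch.load("model_cnn.pt"))
model.eval()

example = torch.rand(1, 3, 224, 224)           # one dummy RGB image
traced = torch.jit.trace(model, example)
traced.save("traced_model_cnn.pt")
</pre>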
<h2><a class="anchor" id="autotoc_md52"></a>
<h2><a class="anchor" id="autotoc_md53"></a>
Get training data</h2>
<p>To get training data, a tool is available which reads the poses from a vents file and an others file, changes the robot's pose, and takes pictures automatically. The tool should be used with the simulation spawned in an undocked position, so that the dock simulation does not interfere with setting the pose manually: </p><pre class="fragment">roslaunch isaac sim.launch pose:="10.5 -9 5 0 0 0 1"
</pre><p>To run the tool: </p><pre class="fragment">rosrun img_analysis get_train_data -path_dataset $PATH_DATASET -vent_poses $VENT_POSES -other_poses $OTHER_POSES [OPTIONS]
</pre><p>Arguments:</p>
<ul>
<li>path_dataset - path where the datasets are saved; mandatory.</li>
<li>vent_poses - .txt file containing the vent poses.</li>
<li>other_poses - .txt file containing the other, non-vent poses.</li>
<li>robot_dist - robot's distance to the vent; default 1 m.</li>
<li>train_pics_per_vent - number of pictures taken per vent/other for the train data.</li>
<li>test_pics_per_vent - number of pictures taken per vent/other for the test data.</li>
</ul>
<h2><a class="anchor" id="autotoc_md53"></a>
<h2><a class="anchor" id="autotoc_md54"></a>
Test single picture</h2>
<p>There is a script, analyse_img.py, in the resources/ folder, which takes as argument the path of a picture taken with the sci_cam, processes it, and outputs the classification result. This script is useful to make sure that the C++ API for PyTorch is working properly.</p>
<p>Parameters: image - path of the image to analyse.</p>
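<p>In outline, the script does something like the following (the preprocessing and class order are assumptions, not the actual resources/analyse_img.py code): </p><pre class="fragment"># Hedged sketch of single-image classification, in the spirit of
# resources/analyse_img.py; preprocessing and class order are assumed.
import sys
import torch
from PIL import Image
from torchvision import transforms

classes = ["free", "obstacle", "unknown"]
tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

model = torch.jit.load("traced_model_cnn.pt")
model.eval()

img = tf(Image.open(sys.argv[1]).convert("RGB")).unsqueeze(0)
with torch.no_grad():
    pred = model(img).argmax(dim=1).item()
print(classes[pred])
</pre>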
<h1><a class="anchor" id="autotoc_md54"></a>
<h1><a class="anchor" id="autotoc_md55"></a>
Anomaly Detectors</h1>
<p><a class="el" href="signal_anomaly.html">Signal</a> <a class="el" href="semantic_anomaly.html">Semantic</a> image_anomaly <a class="el" href="volumetric_anomaly.html">Volumetric</a> <a class="el" href="gmm.html">GMM Change Detection</a> </p>
</div></div><!-- contents -->
20 changes: 10 additions & 10 deletions html/ano.js
@@ -1,20 +1,20 @@
var ano =
[
[ "Image Anomaly Detector", "ano.html#autotoc_md48", [
[ "Overview", "ano.html#autotoc_md49", null ],
[ "TorchLib", "ano.html#autotoc_md50", null ],
[ "Define and train the CNN", "ano.html#autotoc_md51", null ],
[ "Get training data", "ano.html#autotoc_md52", null ],
[ "Test single picture", "ano.html#autotoc_md53", null ]
[ "Image Anomaly Detector", "ano.html#autotoc_md49", [
[ "Overview", "ano.html#autotoc_md50", null ],
[ "TorchLib", "ano.html#autotoc_md51", null ],
[ "Define and train the CNN", "ano.html#autotoc_md52", null ],
[ "Get training data", "ano.html#autotoc_md53", null ],
[ "Test single picture", "ano.html#autotoc_md54", null ]
] ],
[ "Anomaly Detectors", "ano.html#autotoc_md54", null ],
[ "Anomaly Detectors", "ano.html#autotoc_md55", null ],
[ "Signal", "signal_anomaly.html", null ],
[ "Semantic", "semantic_anomaly.html", null ],
[ "Volumetric", "volumetric_anomaly.html", null ],
[ "GMM Change Detection", "gmm.html", [
[ "Overview", "gmm.html#autotoc_md45", null ],
[ "Requirements", "gmm.html#autotoc_md46", [
[ "Usage", "gmm.html#autotoc_md47", null ]
[ "Overview", "gmm.html#autotoc_md46", null ],
[ "Requirements", "gmm.html#autotoc_md47", [
[ "Usage", "gmm.html#autotoc_md48", null ]
] ]
] ]
];
2 changes: 1 addition & 1 deletion html/beh.js
@@ -2,6 +2,6 @@ var beh =
[
[ "Inspection Behavior", "inspection.html", "inspection" ],
[ "Cargo Behavior", "cargo.html", [
[ "Using the cargo tool", "cargo.html#autotoc_md55", null ]
[ "Using the cargo tool", "cargo.html#autotoc_md56", null ]
] ]
];
2 changes: 1 addition & 1 deletion html/cargo.html
@@ -91,7 +91,7 @@
</div><!--header-->
<div class="contents">
<div class="textblock"><p>This directory provides the cargo_tool</p>
<h1><a class="anchor" id="autotoc_md55"></a>
<h1><a class="anchor" id="autotoc_md56"></a>
Using the cargo tool</h1>
<p>This tool is used to initiate pickup and drop cargo actions.</p>
<p>To run the tool: </p><pre class="fragment">rosrun cargo cargo_tool -$ACTION [OPTIONS]
16 changes: 8 additions & 8 deletions html/demos_native.html
@@ -91,35 +91,35 @@
</div><!--header-->
<div class="contents">
<div class="textblock"><p>To run demos using docker containers, please see <a class="el" href="docker.html">Docker Install</a>. There you'll find instructions on how to run the containers and available demos.</p>
<h1><a class="anchor" id="autotoc_md21"></a>
<h1><a class="anchor" id="autotoc_md22"></a>
Starting ISAAC FSW</h1>
<pre class="fragment">roslaunch isaac sim.launch dds:=false robot:=sim_pub rviz:=true
</pre><h1><a class="anchor" id="autotoc_md22"></a>
</pre><h1><a class="anchor" id="autotoc_md23"></a>
Native Demos</h1>
<h2><a class="anchor" id="autotoc_md23"></a>
<h2><a class="anchor" id="autotoc_md24"></a>
Inspection Demos</h2>
<p>The inspection node enables the robot to inspect its surroundings; there are multiple modes to do so. If the robot is not already undocked, it will undock when the inspection command is executed. There are many customization options available for the inspection tool, so please check the help output with: </p><pre class="fragment">rosrun inspection inspection_tool -help
</pre><h3><a class="anchor" id="autotoc_md24"></a>
</pre><h3><a class="anchor" id="autotoc_md25"></a>
Anomaly</h3>
<p>Used to take a close-up picture of an area and analyse it with the image anomaly detection node: </p><pre class="fragment">rosrun inspection inspection_tool -anomaly
</pre><p>The robot will inspect the target defined in astrobee/behaviors/inspection/resources/inspection_iss.txt by default, which is a vent at the entry of the JEM, bay 1. The robot will generate the survey, go to the inspection point, and take a picture with the sci camera. The incoming picture will be analysed by the image anomaly detector; in this case it will report back whether the analysed vent is free or obstructed. Note: if the image anomaly detector was not launched with the FSW, the tool will only take the picture and skip the analysis.</p>
<p>Options include: target_distance (desired distance to target); target_size_x (target size x - width); target_size_y (target size y - height)</p>
<h3><a class="anchor" id="autotoc_md25"></a>
<h3><a class="anchor" id="autotoc_md26"></a>
Geometry</h3>
<p>Used to create a geometric model of an area (3D model with texture). Takes pictures at all the locations specified in the survey plan. </p><pre class="fragment">rosrun inspection inspection_tool -geometry
</pre><p>The robot will inspect the target defined in astrobee/behaviors/inspection/resources/geometry_iss.txt by default, which corresponds to bay 5 in the JEM. The robot will go to all locations and, after a stable stationkeep, take a sci camera image. When the image is confirmed to have been received, the robot moves on to the next station.</p>
<p>For instructions on how to analyse the obtained data recorded in a bagfile, go to <a class="el" href="geometric_streaming_mapper.html">Geometry mapper and streaming mapper</a>.</p>
<h3><a class="anchor" id="autotoc_md26"></a>
<h3><a class="anchor" id="autotoc_md27"></a>
Volumetric</h3>
<p>Used to create a volumetric model of a given signal. </p><pre class="fragment">rosrun inspection inspection_tool -volumetric
</pre><p>The robot will inspect the target defined in astrobee/behaviors/inspection/resources/volumetric_iss.txt by default, which corresponds to going around the JEM module. The robot stops at each station and then continues to the next.</p>
<p>To learn more about how to process this data, consult <a class="el" href="volumetric_mapper.html">Volumetric Mapper</a>. Data types that can be scoped through this method are signals such as WiFi signal strength and RFID tags.</p>
<h3><a class="anchor" id="autotoc_md27"></a>
<h3><a class="anchor" id="autotoc_md28"></a>
Panorama</h3>
<p>Used to take pictures of a certain location that can be stitched into a panorama. </p><pre class="fragment">rosrun inspection inspection_tool -panorama
</pre><p>The robot will take pictures with the camera centered at the location defined in the survey file astrobee/behaviors/inspection/resources/panorama_iss.txt. The inspection node generates the survey from the parameters provided or derived from the camera model; therefore, the pose specified in the survey file is the panorama center, not each station's coordinates. The robot takes pictures at each generated station, similarly to the geometry mode.</p>
<p>Options include: h_fov (camera horizontal fov, default -1 uses camera matrix); max_angle (maximum angle (deg) to target); max_distance (maximum distance to target); min_distance (minimum distance to target); overlap (overlap between images); pan_max (maximum pan); pan_min (minimum pan).</p>
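<p>In outline, the survey generation reduces to tiling the pan range with stations spaced by the field of view minus the requested overlap; a sketch under that assumption (an illustration, not the inspection node's actual algorithm): </p><pre class="fragment"># Hedged sketch of panorama station generation: tile the pan range with
# stations spaced by the horizontal FOV minus the requested overlap.
import numpy as np

def pan_stations(pan_min_deg, pan_max_deg, h_fov_deg, overlap):
    step = h_fov_deg * (1.0 - overlap)          # effective angular step
    n = int(np.ceil((pan_max_deg - pan_min_deg) / step)) + 1
    return np.linspace(pan_min_deg, pan_max_deg, n)

print(pan_stations(-180.0, 180.0, 62.0, 0.5))   # example values
</pre>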
<h2><a class="anchor" id="autotoc_md28"></a>
<h2><a class="anchor" id="autotoc_md29"></a>
Cargo Transport</h2>
<p>In simulation, it is possible to perform cargo transfer using Astrobee. To do so you will have to spawn the cargo at a certain location and send the commands to pick up and drop the cargo.</p>
<p>To spawn a cargo: </p><pre class="fragment">roslaunch isaac_gazebo spawn_object.launch spawn:=cargo pose:="11.3 -5.6 5.7 -0.707 0 0 0.707" name:=CTB_05_1070