
Commit

update website
tzuyuan committed Feb 19, 2024
1 parent 1bb3a0c commit da52381
Showing 3,376 changed files with 143,200 additions and 39,542 deletions.
The diff you're trying to view is too large. We only load the first 3000 changed files.
48 changes: 36 additions & 12 deletions _pages/home.md
@@ -1,22 +1,23 @@
---
permalink: /
title: "DRIFT"
excerpt: "Dead Reckoning for Robotics in Field Time"
excerpt: "Dead Reckoning in Field Time"
author_profile: true
redirect_from: /sitemap/
---

<p style="text-align:center" float="middle"><b style="font-size:15pt">D</b>ead <b style="font-size:15pt">R</b>eckoning for Robotics <b style="font-size:15pt">I</b>n <b style="font-size:15pt">F</b>ield <b style="font-size:15pt">T</b>ime: A Symmetry-Preserving Filter Based Mobile Robot State Estimator Library</p>
<p style="text-align:center" float="middle"><b style="font-size:15pt">D</b>ead <b style="font-size:15pt">R</b>eckoning <b style="font-size:15pt">I</b>n <b style="font-size:15pt">F</b>ield <b style="font-size:15pt">T</b>ime: A Symmetry-Preserving Filter Based Mobile Robot State Estimator Library</p>
<h1 id="h.uigj53erdbnu" dir="ltr" class="zfr3Q duRjpb CDt4Ke " style="background-color: transparent; border-bottom: none; border-left: none; border-right: none; border-top: none; margin-bottom: 10.0pt; margin-top: 0.0pt; padding-bottom: 0.0pt; padding-left: 0.0pt; padding-right: 0.0pt; padding-top: 0.0pt; text-align: center;"><span class="C9DxTc " style="font-family: Arial; font-size: 12.0pt; font-variant: normal; font-weight: 700; vertical-align: baseline;">Tzu-Yuan Lin &nbsp; &nbsp; &nbsp; Tingjun Li &nbsp; &nbsp; &nbsp; Jonathan Tong &nbsp; &nbsp; &nbsp; Justin Yu &nbsp; &nbsp; &nbsp; Maani Ghaffari</span></h1>

<p dir="ltr" class="zfr3Q CDt4Ke " style="background-color: transparent; border-bottom: none; border-left: none; border-right: none; border-top: none; margin-bottom: 10.0pt; margin-top: 0.0pt; padding-bottom: 0.0pt; padding-left: 0.0pt; padding-right: 0.0pt; padding-top: 0.0pt; text-align: center;"><span class="C9DxTc " style="font-variant: normal;">University of Michigan, Ann Arbor, MI, USA&nbsp;</span></p>

<p float="middle">
<div>
<video style="border-radius:15px" controls muted autoplay="autoplay" src="./images/placeholder.mp4" controls="controls" width="100%" />
<script>
<!-- <video style="border-radius:15px" controls muted autoplay="autoplay" src="./images/placeholder.mp4" controls="controls" width="100%" /> -->
<!-- <script>
document.getElementById('vid').play();
</script>
</script> -->
<iframe width="560" height="315" src="https://www.youtube.com/embed/73HMagemIng?si=_NK84zxx0eMYDH13" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
<!--<iframe style="width:100%" src=" https://www.youtube.com/embed/oVbP-Y8xT_E?autoplay=1" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen="false" id="fitvid0"></iframe>-->
</div>
</p>
@@ -25,14 +26,14 @@ redirect_from: /sitemap/
<b>Overview</b>
<div>
<p class="small">
Dead Reckoning for Robotics In Field Time (DRIFT) is an open-source C++ software library designed to provide accurate and high-frequency state estimation for a variety of grounded mobile robot architectures, including legged and wheeled platforms. Leveraging symmetry-preserving filters such as Invariant extended Kalman filtering (InEKF), this modular library empowers roboticists and engineers with a robust and adaptable tool to estimate instantaneous local pose and velocity in diverse environments. The software is structured in a modular fashion, allowing users to define their own sensor types, and propagation and correction methods, offering a high degree of customization.
Dead Reckoning In Field Time (DRIFT) is an open-source C++ software library designed to provide accurate and high-frequency state estimation for a variety of mobile robot architectures, including legged, wheeled, and marine platforms. Leveraging symmetry-preserving filters such as Invariant extended Kalman filtering (InEKF), this modular library empowers roboticists and engineers with a robust and adaptable tool to estimate instantaneous local pose and velocity in diverse environments. The software is structured in a modular fashion, allowing users to define their own sensor types, and propagation and correction methods, offering a high degree of customization.

<br>
<br>

As a header-only library, DRIFT aims to make real-time robot localization easy and accessible, promoting seamless integration across numerous platforms. Additionally, we provide a ROS middleware interface. In the future, we plan to expand support to both Lightweight Communications and Marshalling (LCM) and ROS 2.
</p>
<div align="center"><img src="./images/hi-lvl-flow.png" alt="HighLevelFlow" style="max-width:60%;height:auto"></div>
<div align="center"><img src="./images/flow_chart.jpg" alt="HighLevelFlow" style="max-width:120%;height:auto"></div>
</div>
</div>
</div>
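The overview above describes a modular propagate/correct design in which users supply their own sensor types, propagation methods, and correction methods. As a rough illustration of that pattern only (hypothetical names, written in Python for consistency with the other sketches on this page, not DRIFT's actual C++ API), the structure might look like:

```python
from abc import ABC, abstractmethod

class PropagationMethod(ABC):
    """Hypothetical interface: advances the state between corrections (e.g. from IMU)."""
    @abstractmethod
    def propagate(self, state, measurement, dt):
        ...

class CorrectionMethod(ABC):
    """Hypothetical interface: fuses one sensor measurement into the state."""
    @abstractmethod
    def correct(self, state, measurement):
        ...

class StateEstimator:
    """Skeleton of the filter loop: one propagation model plus several corrections."""
    def __init__(self, propagation: PropagationMethod, corrections: list[CorrectionMethod]):
        self.propagation = propagation    # e.g. IMU-driven propagation
        self.corrections = corrections    # e.g. leg-odometry or wheel-velocity corrections
        self.state = None                 # local pose and velocity estimate

    def step(self, measurement, dt):
        # Propagate with the high-rate sensor, then apply each registered correction.
        self.state = self.propagation.propagate(self.state, measurement, dt)
        for correction in self.corrections:
            self.state = correction.correct(self.state, measurement)
        return self.state
```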
@@ -44,7 +45,7 @@ <div>
<div>
Husky
<p>
<a href="https://clearpathrobotics.com/husky-unmanned-ground-vehicle-robot/" target="_blank"><img src="./images/husky.jpeg" alt="Husky" style="border-radius:10px"></a>
<a href="https://clearpathrobotics.com/husky-unmanned-ground-vehicle-robot/" target="_blank"><img src="./images/husky.png" alt="Husky" style="border-radius:10px"></a>
</p>
<!--<p>
The Husky robot is a wheeled mobile robot platform designed and manufactured by Clearpath Robotics, a Canadian robotics company.
@@ -53,7 +54,7 @@ As a header-only library, DRIFT aims to make real-time robot localization easy a
<div>
MIT MiniCheetah
<p>
<a href="https://www.naverlabs.com/mini-cheetah" target="_blank"><img src="./images/minicheetah.jpg" alt="MITMiniCheetah" style="border-radius:10px"></a>
<a href="https://www.naverlabs.com/mini-cheetah" target="_blank"><img src="./images/minicheetah_forest.jpg" alt="MITMiniCheetah" style="border-radius:10px"></a>
</p>
<!--<p>
The MIT MiniCheetah is a quadrupedal robot designed and developed by the Massachusetts Institute of Technology's Biomimetic Robotics Laboratory.
@@ -62,16 +63,25 @@ As a header-only library, DRIFT aims to make real-time robot localization easy a
<div>
Fetch
<p>
<a href="https://fetchrobotics.com/" target="_blank"><img src="./images/fetch.jpeg" alt="fetch" style="border-radius:10px"></a>
<a href="https://fetchrobotics.com/" target="_blank"><img src="./images/fetch.jpg" alt="fetch" style="border-radius:10px"></a>
</p>
<!--<p>
The Unitree Go1 is a quadruped robot designed and manufactured by Unitree Robotics.
</p>-->
</div>
<div>
Cassie Blue
MRZR-D4
<p>
<a href="https://news.engin.umich.edu/2017/09/latest-two-legged-walking-robot-arrives-at-michigan/" target="_blank"><img src="./images/cassieblue.png" alt="cassieblue" style="border-radius:10px"></a>
<a href="https://military.polaris.com/en-us/mrzr/" target="_blank"><img src="./images/MRZR_D4.png" alt="MRZR" style="border-radius:10px"></a>
</p>
<!--<p>
The Unitree Go1 is a quadruped robot designed and manufactured by Unitree Robotics.
</p>-->
</div>
<div>
Girona 500 AUV (Simulation)
<p>
<a href="https://iquarobotics.com/girona-500-auv" target="_blank"><img src="./images/stonefish.png" alt="MRZR" style="border-radius:10px"></a>
</p>
<!--<p>
The Unitree Go1 is a quadruped robot designed and manufactured by Unitree Robotics.
@@ -114,6 +124,20 @@ As a header-only library, DRIFT aims to make real-time robot localization easy a
<b>Citation</b><br>
<p class="small">
If you use this work or find it helpful, please consider citing the following (BibTeX entries below):

<br><br>Tzu-Yuan Lin, Tingjun Li, Wenzhe Tong, and Maani Ghaffari. "Proprioceptive Invariant Robot State Estimation." arXiv preprint arXiv:2311.04320 (2023). (Under review for Transactions on Robotics)<br>
</p>
<pre>
<code>
@article{lin2023proprioceptive,
title={Proprioceptive Invariant Robot State Estimation},
author={Lin, Tzu-Yuan and Li, Tingjun and Tong, Wenzhe and Ghaffari, Maani},
journal={arXiv preprint arXiv:2311.04320},
year={2023}
}
</code>
</pre>
<p class="small">
<br><br>Tzu-Yuan Lin, Ray Zhang, Justin Yu, and Maani Ghaffari. "Legged Robot State Estimation using Invariant Kalman Filtering and Learned Contact Events." In Conference on Robot Learning. PMLR, 2021<br>
</p>
<pre>
5 changes: 5 additions & 0 deletions _site/CONTRIBUTING.md
@@ -0,0 +1,5 @@
Contributions are welcome! Please add issues and make pull requests. There are no stupid questions. All ideas are welcome. This is a volunteer project. Be excellent to each other.

Fork from master and go from there. This repository is intended to remain a generic, ready-to-fork template that demonstrates the features of academicpages.

If you make a pull request and change code, please make sure there is a closed issue tagged with 'code change' that has some comment linking to either the single commit (if the change was just one commit) or a diff comparing before/after the change (see [issue 21](https://github.com/academicpages/academicpages.github.io/issues/21) for example). This is so that those who have forked this repo and modified it for their purposes can more easily patch bugs and new features.
16 changes: 16 additions & 0 deletions _site/_pages/404md.txt
@@ -0,0 +1,16 @@
%---
%title: "Page Not Found"
%excerpt: "Page not found. Your pixels are in another canvas."
%sitemap: false
%permalink: /404.html
%---

Sorry, but the page you were trying to view does not exist --- perhaps you can try searching for it below.

<script type="text/javascript">
var GOOG_FIXURL_LANG = 'en';
var GOOG_FIXURL_SITE = '{{ site.url }}'
</script>
<script type="text/javascript"
src="//linkhelp.clients.google.com/tbproxy/lh/wm/fixurl.js">
</script>
127 changes: 127 additions & 0 deletions _site/_pages/Datasetmd.txt
@@ -0,0 +1,127 @@
%---
%permalink: /dataset/
%title: "Dataset"
%author_profile: true
%---


<p float="middle">
<video autoplay="autoplay" src="../images/web_highres_voxels_quaterspeed.mp4" controls="false" width="100%" />
</p>

Welcome! **CarlaSC** is a semantic scene completion dataset with the aim of increasing scene understanding in dynamic environments. Dynamic environments are challenging for scene understanding because dynamic objects leave behind traces and occlusions in completed scenes. As a result, quantifying performance and supervising training of algorithms from real-world data is difficult. Therefore, we propose **CarlaSC**, a synthetic outdoor driving dataset generated from *randomly sampled multi-view geometry*.

## Overview

Our dataset consists of 24 sequences, generated from eight maps with a light traffic, medium traffic, and heavy traffic sequence for each. We obtain data from the [CARLA](https://carla.org/) simulator for its realism, autonomous traffic, and synchronized ground truth. Each sequence consists of three minutes of driving sampled at 10 Hz, for a total of 1800 frames. Each frame contains ground truth data including:

* Observed **point clouds** with **semantic labels** and ego-motion compensated **scene flow** for each point.
* **Pose** and **time** of each observation.
* **Complete semantic scene** represented in Cartesian and cylindrical coordinates. The scene is obtained from twenty randomly placed LiDAR sensors, moved to new locations for every sequence.
* **Bird's Eye View** image for verification.

## Scan Properties

For every frame in our dataset, there is a point cloud with ground truth semantic labels and scene flow vectors, a bird's eye view image for validation, ground truth pose and time, and a ground truth semantically labeled scene. We offer ground truth scenes in both cylindrical and Cartesian coordinates, but focus primarily on the Cartesian system. Scenes are available in two resolutions, one of size 128x128x8 and the other of size 256x256x16. Our semantically labeled scenes include not only the region directly ahead of the ego vehicle but also its full surroundings, as we believe the entire scene is important for safe navigation and planning.


The exact dimensions of each scene in Cartesian and cylindrical coordinates are shown below.

<p float="middle">
<img src="../images/CarlaSCGrid.png" width="100%" />
</p>

Our multi-view scenes include free space labels and minimal occlusions. Each map is divided into a low-traffic, medium-traffic, and high-traffic setting. Low traffic is defined as 25 autonomous pedestrians and 25 vehicles, medium traffic as 50 of each, and high traffic as 100 of each. An example image from our dataset compared to a similar frame in the well-known [Semantic KITTI](http://www.semantic-kitti.org/) dataset is shown below.

<p float="middle">
<img src="../images/HD.png" width="51%" />
<img src="../images/BadKITTIOrig.png" width="45%" />
</p>

## Classes

There are 23 semantic [classes](https://carla.readthedocs.io/en/latest/ref_sensors/#semantic-segmentation-camera) in the CARLA simulator. We remove all unlabeled points and use class 0 to represent free space instead. We also remove any observations of the ego vehicle, resulting in a clean dataset. A histogram of the frequency of all classes is shown below.

<p float="middle">
<img src="../images/HistogramAll.png" width="75%" />
</p>

As can be seen, the distribution of classes is very uneven. Some classes are nearly identical to others, and some, such as sky, do not show up at all. Therefore, we also propose a remapping of the classes to aid with training supervised learning algorithms.

<p float="middle">
<img src="../images/ClassRemapping.png" width="65%" />
</p>
<p float="middle">
<img src="../images/HistogramRemapped.png" width="75%" />
</p>
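To illustrate how such a remapping can be applied in practice, the sketch below builds a lookup table and remaps a whole frame's labels in one vectorized step. The mapping entries and the file name are placeholders for illustration; the actual table is the one shown in the remapping figure above.

```python
import numpy as np

# Placeholder mapping from raw CARLA semantic IDs to remapped training IDs
# (the real table covers all 23 raw classes and keeps 0 as free space).
remap = {0: 0, 1: 1, 2: 1, 4: 2, 10: 3}

# Dense lookup table so the remap applies to an entire label array at once.
lut = np.zeros(max(remap) + 1, dtype=np.uint32)
for raw_id, new_id in remap.items():
    lut[raw_id] = new_id

labels = np.fromfile("000123.label", dtype=np.uint32)  # hypothetical frame
remapped = lut[labels]  # valid as long as the table covers every raw ID present
```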



## Format

Our dataset is split into two coordinate systems with three splits each: a Cartesian and a cylindrical semantic scene completion dataset, each with a training, validation, and testing split. Note that the coordinate system only changes the *output semantic scene*; point clouds and poses are Cartesian in both. An example of the same scene in both coordinate systems is shown below, with a Bird's Eye View camera image for reference. Cylindrical coordinates represent objects near the ego vehicle at high resolution, while objects farther away are represented more coarsely; Cartesian coordinates maintain a consistent resolution throughout the volume.

<p align="center">
<img src="../images/BEV.png" width="30%" />
</p>
<p align="center">
<img src="../images/Cartesian.png" width="45%" />
<img src="../images/Cylindrical.png" width="45%" />
</p>
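For reference, the sketch below shows the standard conversion from Cartesian point coordinates to the cylindrical representation (planar range, azimuth, height). It is only meant to make the relationship between the two systems concrete; the actual grid extents and resolutions are those listed in the dimension figure earlier on this page.

```python
import numpy as np

def cartesian_to_cylindrical(points_xyz: np.ndarray) -> np.ndarray:
    """Convert an (N, 3) array of x, y, z points to (r, theta, z)."""
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    r = np.hypot(x, y)        # planar range sqrt(x^2 + y^2)
    theta = np.arctan2(y, x)  # azimuth in (-pi, pi]
    return np.stack([r, theta, z], axis=1)
```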

The file structure of our data is shown below. Formats are similar to those of Semantic KITTI: semantic labels are stored as [NumPy](https://numpy.org/) uint32 files with the ".label" extension, while other files, including point locations, number of points per cell, and scene flow, are stored as [NumPy](https://numpy.org/) float32 files with the ".bin" extension. Each file is named with a six-character string indicating the frame number, followed by an extension; frame numbers can be mapped to exact times using the "times.txt" file. Note that all files use the ego sensor coordinate frame.
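A minimal sketch of reading one frame with NumPy is shown below. The dtypes and six-digit naming follow the description above, and the folder names follow the directory listing further down this page; the per-point column layout (three floats per point and per flow vector) is an assumption made for illustration.

```python
import numpy as np

def load_frame(sequence_dir: str, frame: int):
    """Load the raw scan, per-point semantic labels, and scene flow for one frame."""
    name = f"{frame:06d}"  # six-character frame number, e.g. "000042"
    points = np.fromfile(f"{sequence_dir}/velodyne/{name}.bin",
                         dtype=np.float32).reshape(-1, 3)  # x, y, z (no intensity)
    labels = np.fromfile(f"{sequence_dir}/labels/{name}.label",
                         dtype=np.uint32)                   # one semantic label per point
    flow = np.fromfile(f"{sequence_dir}/predictions/{name}.bin",
                       dtype=np.float32).reshape(-1, 3)     # ego-motion compensated flow
    return points, labels, flow
```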

### Updated Dataset with Fine Resolution (May 25, 2022)

To better match the standard set by Semantic KITTI, we also provide a separate version of the semantic scene completion ground truth at the same voxel resolution as Semantic KITTI. Specifically, the voxel grid for each frame is of size 256x256x16 over the same volume as before. The fine-resolution download links can be found on the Download page for the Cartesian dataset. Note: the download links only contain the evaluation directory, as all other directories are unaffected by voxel resolution and can be downloaded from the original Cartesian dataset.

<p align="left">
<img src="../images/Folder.png" width="35px" />
<b>Split</b> (Train, Val, and Test)
</p>

<p align="left" style="text-indent: 50px;">
<img src="../images/Folder.png" width="35px" />
<b>Sequence</b>
</p>

<p align="left" style="text-indent: 100px;">
<img src="../images/Folder.png" width="35px" />
<b>Coordinates</b> (cartesian or cylindrical)
</p>

<p align="left" style="text-indent: 150px;">
<img src="../images/Folder.png" width="35px" />
<b>bev</b> bird's eye view image of each frame
</p>

<p align="left" style="text-indent: 150px;">
<img src="../images/Folder.png" width="35px" />
<b>evaluation</b> semantic scene completion ground truth
</p>

<p align="left" style="text-indent: 150px;">
<img src="../images/Folder.png" width="35px" />
<b>labels</b> semantically labeled point cloud for each frame
</p>

<p align="left" style="text-indent: 150px;">
<img src="../images/Folder.png" width="35px" />
<b>predictions</b> ego-motion compensated scene flow for each frame
</p>

<p align="left" style="text-indent: 150px;">
<img src="../images/Folder.png" width="35px" />
<b>velodyne</b> raw point cloud without intensity
</p>

<p align="left" style="text-indent: 150px;">
<img src="../images/Paper.png" width="35px" />
<b>poses.txt</b>
</p>

<p align="left" style="text-indent: 150px;">
<img src="../images/Paper.png" width="35px" />
<b>times.txt</b>
</p>
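Putting the layout above together, the path to one frame's semantic scene completion ground truth might be assembled as in the sketch below. The dataset root and sequence name are hypothetical, and the reshape assumes the fine-resolution 256x256x16 grid described earlier with row-major ordering.

```python
import os
import numpy as np

root = "CarlaSC"                      # hypothetical dataset root
frame = f"{42:06d}"                   # six-character frame number
gt_path = os.path.join(root, "Train", "Town01_Light",  # hypothetical sequence name
                       "cartesian", "evaluation", f"{frame}.label")

# Ground-truth scene reshaped to the fine-resolution voxel grid (256 x 256 x 16);
# row-major ordering is assumed here.
scene = np.fromfile(gt_path, dtype=np.uint32).reshape(256, 256, 16)
```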


