diff --git a/UR_arm_camera_calibration-main/README.md b/UR_arm_camera_calibration-main/README.md
deleted file mode 100644
index 2e4cc0c..0000000
--- a/UR_arm_camera_calibration-main/README.md
+++ /dev/null
@@ -1,326 +0,0 @@
-# Multi_Calibration
-This repo combines camera intrinsic, hand-eye, and camera-laser extrinsic calibration methods for the Blaser sensor and the UR robot arm.
-![Flowchart](images/Calibration_process.png)
-
-
-## 0. Installation
-### 0.1 Environment
-Use Ubuntu 18.04/20.04 with ROS [Melodic](http://wiki.ros.org/melodic/Installation/Ubuntu) / [Noetic](http://wiki.ros.org/noetic/Installation/Ubuntu).
-### 0.2 Dependencies
-1. [blaser_ros Melodic](https://github.com/biorobotics/blaser_mapping/tree/master/blaser_ros) / [blaser_ros Noetic](https://github.com/biorobotics/blaser_mapping/tree/tina/add-ubuntu20-compatibility/blaser_ros)
-2. ximea_ros_cam (Install with the Blaser Dependencies)
-3. [Universal_Robots_ROS_Driver](https://github.com/UniversalRobots/Universal_Robots_ROS_Driver)
-4. MoveIt [Melodic Tutorials](http://docs.ros.org/en/melodic/api/moveit_tutorials/html/index.html) / MoveIt [Noetic Tutorials](https://ros-planning.github.io/moveit_tutorials/)
-5. [apriltag_ros](https://github.com/AprilRobotics/apriltag_ros)
-
-
-Build the blaser_ros workspace and the Blaser dependencies before proceeding.
-
-Follow the instructions in 0.3 to install the other dependencies.
-
-### 0.3 Build Calibration Workspace
-
-- ximea_ros_cam
-
-```shell
-cd ~
-mkdir -p ~/calibration_ws/src
-cd ~/calibration_ws/src
-git clone https://github.com/wavelab/ximea_ros_cam.git
-```
-
-- camera_model
-
-Download the folder from this [link](https://drive.google.com/drive/folders/1xBam90TTU8bbKEM8wfbRfDEYF7RcXm7B?usp=sharing), and place it in `~/calibration_ws/src`.
-
-- apriltag
-```shell
-cd ~/calibration_ws/src
-git clone https://github.com/AprilRobotics/apriltag.git
-```
-
-- Build the workspace
-
-```shell
-cd ~/calibration_ws
-catkin build
-```
-
-- Install MoveIt and UR Drivers for MoveIt
-
-```shell
-cd ~/calibration_ws
-git clone -b boost https://github.com/UniversalRobots/Universal_Robots_Client_Library.git src/Universal_Robots_Client_Library
-git clone https://github.com/UniversalRobots/Universal_Robots_ROS_Driver.git src/Universal_Robots_ROS_Driver
-git clone -b calibration_devel https://github.com/fmauch/universal_robot.git src/fmauch_universal_robot
-sudo apt update -qq
-rosdep update
-rosdep install --from-paths src --ignore-src -y
-catkin build
-```
-
-- Install this calibration repo
-
-```shell
-cd ~/calibration_ws/src
-git clone https://github.com/biorobotics/UR_arm_camera_calibration.git
-catkin build
-```
-
-Run `catkin build` several times (usually three) until all packages build successfully.
-
-The calibration workspace's `src` should now contain the following repositories:
-* UR_arm_camera_calibration
-* Universal_Robots_ROS_Driver
-* MoveIt
-* ximea_ros_cam
-* camera_model
-* apriltag
-
-### 0.4 Configuring ximea_ros_cam
-In `~/calibration_ws/src/ximea_ros_cam/launch/example_cam.launch`, change the value of the `serial_no` parameter to the serial number of your Ximea camera, e.g.:
-
-`<param name="serial_no" type="string" value="YOUR_CAMERA_SERIAL"/>`
-
-Set **num_cams_in_bus** to **1**:
-
-`<param name="num_cams_in_bus" type="int" value="1"/>`
-
-To change the **exposure_time** and **white_balance_mode**, add the following lines (example values; `exposure_time` is in microseconds):
-```xml
-<param name="exposure_time" type="int" value="10000"/>
-<param name="white_balance_mode" type="int" value="2"/>
-```
-
-Configure other camera parameters as needed.
-
-## 1. Camera Intrinsics
-![Flowchart](images/Camera_Intrinsics.png)
-### 1.1 Hardware needed
-* UR robot arm
-* Camera holder ([Example](https://drive.google.com/file/d/12sZWZGFeq9ehgpnKUAZS80BMfkZedMsY/view?usp=sharing))
-
-### 1.2 Calibration process
-
-#### 1.2.0 Configure the Calibration Environment
-
-To run the auto calibration, we first need to set the robot arm's initial joint positions and the checkerboard origin position. To do that, go to `~/calibration_ws/src/multi_calibration/cfg/trajectory_planner.yaml` and change **homePositions** and **bedOrigin**.
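-
-For reference, these entries look roughly like the sketch below (example values only; check the shipped `trajectory_planner.yaml` for the exact keys and dimensions):
-
-```yaml
-homePositions: [0.0, -1.57, 1.57, -1.57, -1.57, 0.0]  # six UR joint angles [rad], example values
-bedOrigin: [0.40, 0.10, 0.05]                         # checkerboard origin [m], example values
-```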
-
-#### 1.2.1 Image Collection
-Open four terminals.
-In the first, connect to the arm:
-```shell
-cd ~/calibration_ws
-source devel/setup.bash
-roslaunch ur_robot_driver ur5e_bringup.launch robot_ip:=[robot ip]  # use the bringup launch file matching your robot model
-```
-In the second, start MoveIt:
-```shell
-cd ~/calibration_ws
-source devel/setup.bash
-roslaunch ur5e_moveit_config ur5e_moveit_planning_execution.launch
-```
-In the third, start the Ximea driver:
-```shell
-cd ~/calibration_ws
-source devel/setup.bash
-roslaunch ximea_ros_cam example_cam.launch
-```
-
-In the last, start the auto calibration:
-```shell
-cd ~/calibration_ws
-source devel/setup.bash
-roslaunch multi_calib_ros auto_calibration.launch directory:="[file position]" camera_intrinsic:=1
-```
-
-#### 1.2.2 Calibration Process
-Go to the folder where the images collected in the last step were saved:
-```shell
-source blaser_ws/devel/setup.bash
-rosrun camera_model Calibration -w 7 -h 10 -s 5 -i ./ -v --camera-model mei #use package camera_model to perform calibration.
-```
-The camera model parameters will be saved to **camera_camera_calib.yaml**.
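-
-For reference, a **mei** result file looks roughly like the sketch below (camodocal-style format used by `camera_model`; all numbers are placeholders and field names may vary slightly between versions):
-
-```yaml
-%YAML:1.0
-model_type: MEI
-camera_name: camera
-image_width: 1280
-image_height: 1024
-mirror_parameters:
-   xi: 1.9
-distortion_parameters:
-   k1: -0.2
-   k2: 0.05
-   p1: 0.0
-   p2: 0.0
-projection_parameters:
-   gamma1: 900.0
-   gamma2: 900.0
-   u0: 640.0
-   v0: 512.0
-```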
-
-Run **rosrun camera_model Calibration --help** to see all the parameters.
-
-Be careful about which camera model you use. For a narrow-FoV camera without heavy distortion, the **pinhole** model should suffice. For a large-FoV camera with strong distortion, use **mei** or **kannala-brandt**. Typically you can try all the camera models and choose the one with the smallest reprojection error.
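-
-For example, a quick way to compare the models on the same image set (model name spellings follow the examples in this README):
-
-```shell
-for m in pinhole mei kannala-brandt; do
-  rosrun camera_model Calibration -w 7 -h 10 -s 5 -i ./ --camera-model $m
-done
-```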
-
-**Finally, examine the intrinsics parameters and results:**
-
-* Is the reprojection error reasonable? (it should be below 1.0 px)
-* Are fx and fy similar?
-* Are cx and cy reasonable? (cx ≈ width / 2, cy ≈ height / 2)
-* Are the image height and width correct?
-
-Update the calibration result to `~/calibration_ws/src/multi_calibration/cfg/ximea_80_calib.yaml`
-
-After finishing the **MEI** model, we also need to calibrate the **PINHOLE** model.
-
-Launch the same four terminals, but run auto_calibration in camera_rect_intrinsic mode instead of camera_intrinsic mode:
-
-```shell
-cd ~/calibration_ws
-source devel/setup.bash
-roslaunch multi_calib_ros auto_calibration.launch directory:="[file position]" camera_rect_intrinsic:=1
-```
-
-Go to the folder where the images collected in the last step were saved and run the pinhole calibration:
-
-```shell
-source blaser_ws/devel/setup.bash
-rosrun camera_model Calibration -w 7 -h 10 -s 5 -i ./ -v --camera-model PINHOLE #use package camera_model to perform calibration.
-```
-
-Update the pinhole calibration result (fx fy cx cy) to `rectCameraMatrix` in `~/calibration_ws/src/multi_calibration/cfg/calib_params.yaml`
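-
-A sketch of the updated entry (placeholder numbers; check `calib_params.yaml` for the exact layout):
-
-```yaml
-rectCameraMatrix: [900.0,   0.0, 640.0,
-                     0.0, 900.0, 512.0,
-                     0.0,   0.0,   1.0]
-```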
-
-#### 1.2.3 Result Checking
-To check the calibration result, run the same four terminals with auto_calibration in camera_rect_intrinsic mode:
-
-```shell
-cd ~/calibration_ws
-source devel/setup.bash
-roslaunch multi_calib_ros auto_calibration.launch directory:="[file position]" camera_rect_intrinsic:=1
-```
-
-In another terminal window open rqt.
-
-```shell
-rqt
-```
-
-Go to Plugins -> Visualization -> Image Viewer
-
-![Image Viewer](images/rqt1.png)
-
-In the Image Viewer panel, switch to the `blaser_cam/image_rect_node` topic; this displays the undistorted image. All straight lines in the real world should appear perfectly straight in the image.
-
-![Image Viewer Panel](images/rqt0.png)
-
-## 2. Hand-eye Calibration
-![Flowchart](images/Hand_eye_calibration.png)
-
-### 2.1 Hardware needed
-* UR robot arm
-* Camera holder ([Example](https://drive.google.com/file/d/1i7l1ikb1o2ocoi0iMsCJhSHDmBsIzraE/view?usp=sharing))
-* April tag holder ([Example](https://drive.google.com/file/d/12sZWZGFeq9ehgpnKUAZS80BMfkZedMsY/view?usp=sharing))
-* [Hand eye calibration ipynb](Hand_eye.ipynb)
-
-### 2.2 Calibration process
-#### 2.2.1 Tag Size Measurement
-The key to fiducial marker pose estimation is a correct estimate of the marker's actual size. To guarantee a precise tag size, editing in Photoshop and printing at 100% scale are recommended.
-
-Note: the tag size should not be measured from the outside of the tag. The tag size is defined as the distance between the detection corners, or equivalently, the length of the edge between the white border and the black border. The following illustration marks the detection corners with red Xs and the tag size with a red arrow for a tag from the 48h12Custom tag family.
-
-![Tag size example](images/tag_size_example.png)
-
-#### 2.2.2 Calibration Process
-
-- Getting the *camera to Tag* transform:
-
- - Start the ximea camera node
-
-
- ```shell
- cd ~/calibration_ws
- source devel/setup.bash
- roslaunch ximea_ros_cam example_cam.launch
- ```
-
-  - In a new window, run auto_calibration.launch in hand_eye mode:
-
- ```shell
- cd ~/calibration_ws
- source devel/setup.bash
- roslaunch multi_calib_ros auto_calibration.launch hand_eye:=1
-  ```
-Now the **camera to Tag** transform can be viewed by echoing the **tf** rostopic.
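-
-For example (frame names here are placeholders; use the ones from your setup):
-
-```shell
-rostopic echo /tf
-# or query a specific transform directly:
-rosrun tf tf_echo camera tag_5
-```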
-
-- Getting the *end-effector to Tag* transform: use the CAD of your April tag holder and camera holder to obtain the transform.
-
-#### 2.2.3 Hand-eye Computation
-After properly setting up the AprilTag pose estimation pipeline (which includes image undistortion and publishing the updated rectified camera matrix), the **camera to Tag** transform should be available to you.
-
-With the measurements of **camera to Tag** and **end-effector to Tag**, input them into the
-[Jupyter Notebook for Hand-eye calibration](https://drive.google.com/file/d/1x8It3NmqM_Qm07OM-dieFRudrayFTtfS/view?usp=sharing) to get the **EE to Camera** transform matrix.
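-
-The core computation amounts to chaining homogeneous transforms. A minimal sketch of the idea for a single measurement pair (the linked notebook is the reference implementation):
-
-```python
-import numpy as np
-
-def invert(T):
-    """Invert a 4x4 homogeneous transform."""
-    R, t = T[:3, :3], T[:3, 3]
-    Ti = np.eye(4)
-    Ti[:3, :3] = R.T
-    Ti[:3, 3] = -R.T @ t
-    return Ti
-
-# Hypothetical measured 4x4 transforms:
-T_ee_tag = np.eye(4)   # end-effector -> tag, from the holder CAD
-T_cam_tag = np.eye(4)  # camera -> tag, from AprilTag detection
-
-# Chain: (EE -> camera) = (EE -> tag) * (tag -> camera)
-T_ee_cam = T_ee_tag @ invert(T_cam_tag)
-
-# Sanity check: chaining back through the camera must reproduce EE -> tag.
-assert np.allclose(T_ee_cam @ T_cam_tag, T_ee_tag)
-```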
-
-#### 2.2.4 Result Checking
-As a sanity check for the calculation, the tag position derived from your **end-effector to Tag** transform and from **end-effector-camera-tag** transform chains should be exactly the same.
-
-## 3. Camera-laser extrinsics
-![Flowchart](images/Laser_Cali.png)
-### 3.1 Hardware needed
-* UR5e robot arm
-* Camera holder ([Example](https://drive.google.com/file/d/12sZWZGFeq9ehgpnKUAZS80BMfkZedMsY/view?usp=sharing))
-* Laser (turned on)
-* Laser calibration image ([Example](https://drive.google.com/drive/folders/1YaM0gm-hhtI97vL1o1ijyLIKgfpJjhvY?usp=sharing))
-
-The extrinsic parameters between the camera and the laser are the 3D position
-of the laser plane in the camera reference frame, defined as $ax + by + cz + d = 0$.
-To determine the plane's position, we take sample points from the plane and fit
-a 3D plane to them. We obtain the sample points from images of a checkerboard
-onto which the laser stripe is projected. Since the checkerboard defines a 3D
-plane, we can compute a 3D position for each 2D laser point on the image as the
-intersection between the checkerboard plane and the line-of-sight ray.
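-
-A minimal sketch of the plane-fitting step (least-squares fit via SVD; illustrative, not the package's exact implementation):
-
-```python
-import numpy as np
-
-def fit_laser_plane(points):
-    """Fit ax + by + cz + d = 0 to an (N, 3) array of sampled 3D points."""
-    centroid = points.mean(axis=0)
-    # The plane normal is the right singular vector with the smallest
-    # singular value of the centered point cloud.
-    _, _, vt = np.linalg.svd(points - centroid)
-    a, b, c = vt[-1]
-    d = -vt[-1] @ centroid
-    return a, b, c, d
-```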
-
-We first need to make sure that the laser stripe detection is working. The laser
-stripe detector essentially performs an HSV filter with five parameters (hue_min,
-hue_max, sat_min, val_min, and val_ratio) defining a 3D range filter: H in
-\[hue_min, hue_max\], S in \[sat_min, 255\], and V in
-\[max(val_min, image_max_val * val_ratio), 255\]. Note that the V range is
-dynamic and is recomputed for every image.
-
-When using a red laser and you want a hue range containing hue = 180, set
-hue_min > hue_max and the program will generate two separate hue ranges:
-\[hue_min, 180\] and \[0, hue_max\].
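-
-Putting the two paragraphs above together, the thresholding logic is essentially the following (an OpenCV sketch, not the package's exact code):
-
-```python
-import cv2
-
-def laser_mask(bgr, hue_min, hue_max, sat_min, val_min, val_ratio):
-    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
-    # Dynamic V threshold, recomputed for every image.
-    val_low = max(val_min, int(hsv[:, :, 2].max() * val_ratio))
-    if hue_min <= hue_max:
-        return cv2.inRange(hsv, (hue_min, sat_min, val_low), (hue_max, 255, 255))
-    # Red laser: the hue range wraps around 180, so take the union of two ranges.
-    m1 = cv2.inRange(hsv, (hue_min, sat_min, val_low), (180, 255, 255))
-    m2 = cv2.inRange(hsv, (0, sat_min, val_low), (hue_max, 255, 255))
-    return cv2.bitwise_or(m1, m2)
-```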
-
-To set these parameters, first use `python scripts/im_saver.py [image_topic]` to
-save a couple of sample images, then use `python scripts/im_hsv_viewer.py
-[image_file]` to determine the HSV values of the laser stripe and set appropriate
-threshold parameters. Load these values in a config file (todo:
-give example config and dir), which will be used in the calibration later.
-
-To test these parameters, run the `laser_stripe_detector` node with sample images
-and the config file (**todo: turn on lsd visualization**).
-
-To collect more images for laser calibration, go to **calibration_ws** and run the following:
-
-First, start the Ximea driver:
-```shell
-cd ~/calibration_ws
-source devel/setup.bash
-roslaunch ximea_ros_cam example_cam.launch
-```
-
-Next, start the auto_calibration in laser_cam mode:
-```shell
-cd ~/calibration_ws
-source devel/setup.bash
-roslaunch multi_calib_ros auto_calibration.launch directory:="[file position]" laser_cam:=1
-```
-
-Run **rqt** in a different terminal:
-
-```shell
-rqt
-```
-
-Go to Plugins -> Topics -> Message Publisher
-
-![Message Publisher](images/rqt2.png)
-
-In the Message Publisher panel, select the `/execution_status` topic.
-
-![Message Publisher Panel](images/rqt3.png)
-
-Publish at whatever rate you want; images will be captured automatically at that rate.
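-
-For example, to trigger a capture once per second (the message type of `/execution_status` is an assumption here; check it first with `rostopic info /execution_status`):
-
-```shell
-rostopic pub -r 1 /execution_status std_msgs/Bool "data: true"
-```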
diff --git a/UR_arm_camera_calibration-main/apriltag_ros/LICENSE b/UR_arm_camera_calibration-main/apriltag_ros/LICENSE
deleted file mode 100644
index 09b1751..0000000
--- a/UR_arm_camera_calibration-main/apriltag_ros/LICENSE
+++ /dev/null
@@ -1,30 +0,0 @@
-BSD 2-Clause License
-
-Copyright (c) 2017, California Institute of Technology.
-All rights reserved.
-
-Redistribution and use in source and binary forms, with or without
-modification, are permitted provided that the following conditions are met:
-
-1. Redistributions of source code must retain the above copyright notice,
- this list of conditions and the following disclaimer.
-2. Redistributions in binary form must reproduce the above copyright notice,
- this list of conditions and the following disclaimer in the documentation
- and/or other materials provided with the distribution.
-
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
-LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-POSSIBILITY OF SUCH DAMAGE.
-
-The views and conclusions contained in the software and documentation are
-those of the authors and should not be interpreted as representing official
-policies, either expressed or implied, of the California Institute of
-Technology.
diff --git a/UR_arm_camera_calibration-main/apriltag_ros/README.md b/UR_arm_camera_calibration-main/apriltag_ros/README.md
deleted file mode 100644
index 988140e..0000000
--- a/UR_arm_camera_calibration-main/apriltag_ros/README.md
+++ /dev/null
@@ -1,91 +0,0 @@
-# apriltag_ros
-
-`apriltag_ros` is a Robot Operating System (ROS) wrapper of the [AprilTag 3 visual fiducial detector](https://april.eecs.umich.edu/software/apriltag.html). For details and tutorials, please see the [ROS wiki](http://wiki.ros.org/apriltag_ros).
-
-`apriltag_ros` depends on the latest release of the [AprilTag library](https://github.com/AprilRobotics/apriltag). Clone it into your catkin workspace before building.
-
-**Authors**: Danylo Malyuta, Wolfgang Merkt
-
-**Maintainers**: [Danylo Malyuta](mailto:danylo.malyuta@gmail.com) ([Autonomous Control Laboratory](https://www.aa.washington.edu/research/acl), University of Washington), [Wolfgang Merkt](https://github.com/wxmerkt)
-
-## Quickstart
-
-Starting with a working ROS installation (Kinetic and Melodic are supported):
-```
-export ROS_DISTRO=melodic # Set this to your distro, e.g. kinetic or melodic
-source /opt/ros/$ROS_DISTRO/setup.bash # Source your ROS distro
-mkdir -p ~/catkin_ws/src # Make a new workspace
-cd ~/catkin_ws/src # Navigate to the source space
-git clone https://github.com/AprilRobotics/apriltag.git # Clone Apriltag library
-git clone https://github.com/AprilRobotics/apriltag_ros.git # Clone Apriltag ROS wrapper
-cd ~/catkin_ws # Navigate to the workspace
-rosdep install --from-paths src --ignore-src -r -y # Install any missing packages
-catkin build # Build all packages in the workspace (catkin_make_isolated will work also)
-```
-See the [ROS wiki](http://wiki.ros.org/apriltag_ros) for details and tutorials.
-
-## Tag Size Definition
-
-For a correct depth estimation (and hence the correct full pose) it is necessary to specify the tag size in config/tags.yaml correctly. In the [Wiki for the AprilTag Library](https://github.com/AprilRobotics/apriltag/wiki/AprilTag-User-Guide#pose-estimation) the correct interpretation of the term "tag size" is explained. The size is defined as the length of the shared border between the solid black and the solid white rectangle of a tag. Note that for AprilTag 3 marker families this does not in fact correspond to the outside of the marker.
-
-Below is a visualization of the tag size (red arrow) to be specified for the most common tag classes:
-![Tag Size Guide](./apriltag_ros/docs/tag_size_guide.svg)
-
-## Contributing
-
-Pull requests are welcome! Especially for the following areas:
-
-- Publishing of the AprilTag 3 algorithm intermediate images over a ROS image topic (which AprilTag 3 already generates when `tag_debug==1`)
-- Conversion of the bundle calibration script from MATLAB to Python
-- Extend calibration to support calibrating tags that cannot appear simultaneously with the master tag, but do appear simultaneously with other tags which themselves or via a similar relationship appear with the master tag (e.g. a bundle with the geometry of a cube - if the master is on one face, tags on the opposite face cannot currently be calibrated). This is basically "transform chaining" and potentially allows calibration of bundles with arbitrary geometry as long as a transform chain exists from any tag to the master tag
-- Supporting multiple tag family detection (currently all tags have to be of the same family). This means calling the detector once for each family. Because the core AprilTag 2 algorithm is the performance bottleneck, detection of `n` tag families will possibly decrease performance by `1/n` (tbd if this still holds for v3)
-
-## Changelog
-
-- In March 2019, the code was upgraded to AprilTag 3, and thus the options `refine_pose`, `refine_decode`, and `black_border` were removed.
-
-## Copyright
-
-The source code in `apriltag_ros/` is original code that is the ROS wrapper itself, see the [LICENSE](https://github.com/AprilRobotics/apriltag_ros/blob/526b9455121ae0bb6b4c1c3db813f0fbdf78393c/LICENSE). It is inspired by [apriltags_ros](https://github.com/RIVeR-Lab/apriltags_ros) and provides a superset of its functionalities.
-
-If you use this code, please kindly inform [Danylo Malyuta](mailto:danylo.malyuta@gmail.com) (to maintain a list here of research works that have benefited from the code) and cite:
-
-- D. Malyuta, C. Brommer, D. Hentzen, T. Stastny, R. Siegwart, and R. Brockers, “[Long-duration fully autonomous operation of rotorcraft unmanned aerial systems for remote-sensing data acquisition](https://onlinelibrary.wiley.com/doi/abs/10.1002/rob.21898),” Journal of Field Robotics, p. arXiv:1908.06381, Aug. 2019.
-- C. Brommer, D. Malyuta, D. Hentzen, and R. Brockers, “[Long-duration autonomy for small rotorcraft UAS including recharging](https://ieeexplore.ieee.org/document/8594111),” in IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, p. arXiv:1810.05683, oct 2018.
-- J. Wang and E. Olson, "[AprilTag 2: Efficient and robust fiducial detection](http://ieeexplore.ieee.org/document/7759617/)," in ''Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)'', October 2016.
-
-```
-@article{Malyuta2019,
- doi = {10.1002/rob.21898},
- url = {https://doi.org/10.1002/rob.21898},
- pages = {arXiv:1908.06381},
- year = {2019},
- month = aug,
- publisher = {Wiley},
- author = {Danylo Malyuta and Christian Brommer and Daniel Hentzen and Thomas Stastny and Roland Siegwart and Roland Brockers},
- title = {Long-duration fully autonomous operation of rotorcraft unmanned aerial systems for remote-sensing data acquisition},
- journal = {Journal of Field Robotics}
-}
-@inproceedings{Brommer2018,
- doi = {10.1109/iros.2018.8594111},
- url = {https://doi.org/10.1109/iros.2018.8594111},
- pages = {arXiv:1810.05683},
- year = {2018},
- month = {oct},
- publisher = {{IEEE}},
- author = {Christian Brommer and Danylo Malyuta and Daniel Hentzen and Roland Brockers},
- title = {Long-Duration Autonomy for Small Rotorcraft {UAS} Including Recharging},
- booktitle = {{IEEE}/{RSJ} International Conference on Intelligent Robots and Systems}
-}
-@inproceedings{Wang2016,
- author = {Wang, John and Olson, Edwin},
- booktitle = {2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
- doi = {10.1109/IROS.2016.7759617},
- isbn = {978-1-5090-3762-9},
- month = {oct},
- pages = {4193--4198},
- publisher = {IEEE},
- title = {{AprilTag 2: Efficient and robust fiducial detection}},
- year = {2016}
-}
-```
diff --git a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/CHANGELOG.rst b/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/CHANGELOG.rst
deleted file mode 100644
index 54420d4..0000000
--- a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/CHANGELOG.rst
+++ /dev/null
@@ -1,43 +0,0 @@
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Changelog for package apriltag_ros
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-3.2.1 (2022-04-05)
-------------------
-* Fixed propagation of apriltag and Eigen headers and libraries (#124)
-* Drop old C++11 as it breaks with new log4cxx.
-* Contributors: Jochen Sprickerhof, Remo Diethelm, Wolfgang Merkt
-
-3.2.0 (2022-03-10)
-------------------
-* Add transport hint option (#108)
-* Move to using the apriltag CMake target (#104)
-* Set the tag's parent frame to the camera optical frame (#101)
-* Fix bug in K matrix in single_image_client (#103)
-* Add configurable max_hamming_distance for the AprilTag Detector (#93)
-* Introduce lazy processing for ContinuousDetector (#80)
-* Contributors: Akshay Prasad, Amal Nanavati, Christian Rauch, Hongzhuo Liang, Wolfgang Merkt
-
-3.1.2 (2020-07-15)
-------------------
-* Add support for tagCircle21h7, tagCircle49h12 (#69)
-* Add support for tagCustom48h12 (#65)
-* Contributors: Anthony Biviano, Kyle Saltmarsh, Wolfgang Merkt, kai wu
-
-3.1.1 (2019-10-07)
-------------------
-* Add support for tagStandard41h12 and tagStandard52h13 (#63, #59).
-* Add gray image input support (#58).
-* Contributors: Alexander Reimann, Moritz Zimmermann, Samuel Bachmann, Wolfgang Merkt
-
-3.1.0 (2019-05-25)
-------------------
-* Prepare for release (3.1) and fix catkin_lint errors
-* Upgrade to AprilTag 3, fix installation, and performance improvements (#43)
- Upgrade to AprilTag 3, fix installation, and performance improvements
-* Rename package to apriltag_ros
- - Relates to https://github.com/AprilRobotics/apriltag_ros/issues/45
-* Contributors: Wolfgang Merkt
-
-1.0.0 (2018-05-14)
-------------------
diff --git a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/CMakeLists.txt b/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/CMakeLists.txt
deleted file mode 100644
index 8323712..0000000
--- a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/CMakeLists.txt
+++ /dev/null
@@ -1,157 +0,0 @@
-cmake_minimum_required(VERSION 3.0.2)
-project(apriltag_ros)
-
-find_package(catkin REQUIRED COMPONENTS
- cmake_modules
- cv_bridge
- geometry_msgs
- image_geometry
- image_transport
- message_generation
- nodelet
- pluginlib
- roscpp
- sensor_msgs
- std_msgs
- tf
-)
-
-find_package(Eigen3 REQUIRED)
-find_package(OpenCV REQUIRED)
-find_package(apriltag REQUIRED)
-
-# Set the build type. Options are:
-# Coverage : w/ debug symbols, w/o optimization, w/ code-coverage
-# Debug : w/ debug symbols, w/o optimization
-# Release : w/o debug symbols, w/ optimization
-# RelWithDebInfo : w/ debug symbols, w/ optimization
-# MinSizeRel : w/o debug symbols, w/ optimization, stripped binaries
-# We default to 'Release' if none was specified
-IF(NOT CMAKE_BUILD_TYPE AND NOT CMAKE_CONFIGURATION_TYPES)
- MESSAGE(STATUS "Setting build type to 'Release' as none was specified.")
- SET(CMAKE_BUILD_TYPE Release CACHE STRING "Choose the build type" FORCE)
- SET_PROPERTY(CACHE CMAKE_BUILD_TYPE PROPERTY STRINGS "Coverage" "Debug" "Release" "MinSizeRel" "RelWithDebInfo")
-ENDIF()
-
-
-add_compile_options("-O3" "-funsafe-loop-optimizations" "-fsee" "-funroll-loops" "-fno-math-errno" "-funsafe-math-optimizations" "-ffinite-math-only" "-fno-signed-zeros")
-
-# Note: These options have been turned off to allow for binary releases -
-# in local builds, they can be reactivated to achieve higher performance.
-# if("${CMAKE_SYSTEM_PROCESSOR}" MATCHES "x86_64" OR "x86_32")
-# message("enabling msse2 for x86_64 or x86_32 architecture")
-# set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -march=native -msse2 ")
-# endif()
-# if("${CMAKE_SYSTEM_PROCESSOR}" MATCHES "arm")
-# message("enabling -mfpu=neon -mfloat-abi=softfp for ARM architecture")
-# set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -march=armv7-a -mcpu=cortex-a9 -mfpu=neon -mtune=cortex-a9 -mvectorize-with-neon-quad -ffast-math ")
-# endif()
-
-add_message_files(
- FILES
- AprilTagDetection.msg
- AprilTagDetectionArray.msg
-)
-
-add_service_files(
- FILES
- AnalyzeSingleImage.srv
-)
-
-generate_messages(
- DEPENDENCIES
- geometry_msgs
- sensor_msgs
- std_msgs
-)
-
-# Extract the include directories and libraries from apriltag::apriltag as catkin_package() does not support modern cmake.
-get_target_property(apriltag_INCLUDE_DIRS apriltag::apriltag INTERFACE_INCLUDE_DIRECTORIES)
-get_target_property(apriltag_LIBRARIES apriltag::apriltag INTERFACE_LINK_LIBRARIES)
-
-catkin_package(
- INCLUDE_DIRS
- include
- ${EIGEN3_INCLUDE_DIRS}
- CATKIN_DEPENDS
- cv_bridge
- geometry_msgs
- image_transport
- message_runtime
- nodelet
- pluginlib
- roscpp
- sensor_msgs
- std_msgs
- tf
- DEPENDS
- apriltag
- OpenCV
- LIBRARIES
- ${PROJECT_NAME}_common
- ${PROJECT_NAME}_continuous_detector
- ${PROJECT_NAME}_single_image_detector
-)
-
-###########
-## Build ##
-###########
-
-include_directories(include
- ${catkin_INCLUDE_DIRS}
- ${EIGEN3_INCLUDE_DIRS}
- ${OpenCV_INCLUDE_DIRS}
-)
-
-add_library(${PROJECT_NAME}_common src/common_functions.cpp)
-add_dependencies(${PROJECT_NAME}_common ${PROJECT_NAME}_generate_messages_cpp)
-target_link_libraries(${PROJECT_NAME}_common ${catkin_LIBRARIES} ${OpenCV_LIBRARIES} apriltag::apriltag)
-
-add_library(${PROJECT_NAME}_continuous_detector src/continuous_detector.cpp)
-target_link_libraries(${PROJECT_NAME}_continuous_detector ${PROJECT_NAME}_common ${catkin_LIBRARIES})
-
-add_library(${PROJECT_NAME}_single_image_detector src/single_image_detector.cpp)
-target_link_libraries(${PROJECT_NAME}_single_image_detector ${PROJECT_NAME}_common ${catkin_LIBRARIES})
-
-add_executable(${PROJECT_NAME}_continuous_node src/${PROJECT_NAME}_continuous_node.cpp)
-add_dependencies(${PROJECT_NAME}_continuous_node ${PROJECT_NAME}_generate_messages_cpp)
-target_link_libraries(${PROJECT_NAME}_continuous_node ${PROJECT_NAME}_continuous_detector ${catkin_LIBRARIES})
-
-add_executable(${PROJECT_NAME}_single_image_server_node src/${PROJECT_NAME}_single_image_server_node.cpp)
-add_dependencies(${PROJECT_NAME}_single_image_server_node ${PROJECT_NAME}_generate_messages_cpp)
-target_link_libraries(${PROJECT_NAME}_single_image_server_node ${PROJECT_NAME}_single_image_detector ${catkin_LIBRARIES})
-
-add_executable(${PROJECT_NAME}_single_image_client_node src/${PROJECT_NAME}_single_image_client_node.cpp)
-add_dependencies(${PROJECT_NAME}_single_image_client_node ${PROJECT_NAME}_generate_messages_cpp)
-target_link_libraries(${PROJECT_NAME}_single_image_client_node ${PROJECT_NAME}_common ${catkin_LIBRARIES})
-
-
-#############
-## Install ##
-#############
-
-install(DIRECTORY include/${PROJECT_NAME}/
- DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION}
- FILES_MATCHING PATTERN "*.h"
-)
-
-install(DIRECTORY config launch
- DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}
-)
-install(FILES nodelet_plugins.xml
- DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}
-)
-
-install(TARGETS
- ${PROJECT_NAME}_common
- ${PROJECT_NAME}_continuous_detector
- ${PROJECT_NAME}_continuous_node
- ${PROJECT_NAME}_single_image_client_node
- ${PROJECT_NAME}_single_image_detector
- ${PROJECT_NAME}_single_image_server_node
- RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
- ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
- LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
-)
-
-install(PROGRAMS scripts/analyze_image DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION})
diff --git a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/config/settings.yaml b/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/config/settings.yaml
deleted file mode 100644
index 77c36aa..0000000
--- a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/config/settings.yaml
+++ /dev/null
@@ -1,13 +0,0 @@
-# AprilTag 3 code parameters
-# Find descriptions in apriltag/include/apriltag.h:struct apriltag_detector
-# apriltag/include/apriltag.h:struct apriltag_family
-tag_family: 'tagStandard41h12' # options: tagStandard52h13, tagStandard41h12, tag36h11, tag25h9, tag16h5, tagCustom48h12, tagCircle21h7, tagCircle49h12
-tag_threads: 2 # default: 2
-tag_decimate: 1.0 # default: 1.0
-tag_blur: 0.0 # default: 0.0
-tag_refine_edges: 1 # default: 1
-tag_debug: 0 # default: 0
-max_hamming_dist: 2 # default: 2 (Tunable parameter with 2 being a good choice - values >=3 consume large amounts of memory. Choose the largest value possible.)
-# Other parameters
-publish_tf: true # default: false
-transport_hint: "raw" # default: raw, see http://wiki.ros.org/image_transport#Known_Transport_Packages for options
diff --git a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/config/tags.yaml b/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/config/tags.yaml
deleted file mode 100644
index faf7533..0000000
--- a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/config/tags.yaml
+++ /dev/null
@@ -1,49 +0,0 @@
-# # Definitions of tags to detect
-#
-# ## General remarks
-#
-# - All length in meters
-# - 'size' refers to the length of the shared border between solid black and solid white rectangle.
-# See README.md or https://github.com/AprilRobotics/apriltag/wiki/AprilTag-User-Guide#pose-estimation for details.
-# - Ellipsis (...) signifies that the previous element can be repeated multiple times.
-#
-# ## Standalone tag definitions
-# ### Remarks
-#
-# - name is optional
-#
-# ### Syntax
-#
-# standalone_tags:
-# [
-# {id: ID, size: SIZE, name: NAME},
-# ...
-# ]
-standalone_tags:
- [
- {id: 5, size: 0.01833}
- ]
-# ## Tag bundle definitions
-# ### Remarks
-#
-# - name is optional
-# - x, y, z have default values of 0 thus they are optional
-# - qw has default value of 1 and qx, qy, qz have default values of 0 thus they are optional
-#
-# ### Syntax
-#
-# tag_bundles:
-# [
-# {
-# name: 'CUSTOM_BUNDLE_NAME',
-# layout:
-# [
-# {id: ID, size: SIZE, x: X_POS, y: Y_POS, z: Z_POS, qw: QUAT_W_VAL, qx: QUAT_X_VAL, qy: QUAT_Y_VAL, qz: QUAT_Z_VAL},
-# ...
-# ]
-# },
-# ...
-# ]
-tag_bundles:
- [
- ]
diff --git a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/docs/tag_size_guide.svg b/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/docs/tag_size_guide.svg
deleted file mode 100644
index 2dbda28..0000000
--- a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/docs/tag_size_guide.svg
+++ /dev/null
@@ -1,224 +0,0 @@
-<!-- tag_size_guide.svg: "Marker Size definition for apriltag_ros"; SVG markup stripped during extraction -->
diff --git a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/include/apriltag_ros/common_functions.h b/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/include/apriltag_ros/common_functions.h
deleted file mode 100644
index 4363fa1..0000000
--- a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/include/apriltag_ros/common_functions.h
+++ /dev/null
@@ -1,243 +0,0 @@
-/**
- * Copyright (c) 2017, California Institute of Technology.
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *
- * 1. Redistributions of source code must retain the above copyright notice,
- * this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright notice,
- * this list of conditions and the following disclaimer in the documentation
- * and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
- * POSSIBILITY OF SUCH DAMAGE.
- *
- * The views and conclusions contained in the software and documentation are
- * those of the authors and should not be interpreted as representing official
- * policies, either expressed or implied, of the California Institute of
- * Technology.
- *
- ** common_functions.h *********************************************************
- *
- * Wrapper classes for AprilTag standalone and bundle detection. Main function
- * is TagDetector::detectTags which wraps the call to core AprilTag 2
- * algorithm, apriltag_detector_detect().
- *
- * $Revision: 1.0 $
- * $Date: 2017/12/17 13:23:14 $
- * $Author: dmalyuta $
- *
- * Originator: Danylo Malyuta, JPL
- ******************************************************************************/
-
-#ifndef APRILTAG_ROS_COMMON_FUNCTIONS_H
-#define APRILTAG_ROS_COMMON_FUNCTIONS_H
-
-#include <string>
-#include <sstream>
-#include <vector>
-#include <map>
-
-#include <ros/ros.h>
-#include <ros/console.h>
-#include <XmlRpcException.h>
-#include <cv_bridge/cv_bridge.h>
-#include <eigen3/Eigen/Dense>
-#include <eigen3/Eigen/Geometry>
-#include <opencv2/opencv.hpp>
-#include <opencv2/core/core.hpp>
-#include <opencv2/calib3d/calib3d.hpp>
-#include <tf/transform_broadcaster.h>
-#include <sensor_msgs/CameraInfo.h>
-#include <sensor_msgs/image_encodings.h>
-
-#include <apriltag.h>
-
-#include "apriltag_ros/AprilTagDetection.h"
-#include "apriltag_ros/AprilTagDetectionArray.h"
-
-namespace apriltag_ros
-{
-
-template<typename T>
-T getAprilTagOption(ros::NodeHandle& pnh,
- const std::string& param_name, const T & default_val)
-{
- T param_val;
- pnh.param(param_name, param_val, default_val);
- return param_val;
-}
-
-// Stores the properties of a tag member of a bundle
-struct TagBundleMember
-{
- int id; // Payload ID
- double size; // [m] Side length
- cv::Matx44d T_oi; // Rigid transform from tag i frame to bundle origin frame
-};
-
-class StandaloneTagDescription
-{
- public:
- StandaloneTagDescription() {};
- StandaloneTagDescription(int id, double size,
- std::string &frame_name) :
- id_(id),
- size_(size),
- frame_name_(frame_name) {}
-
- double size() { return size_; }
- int id() { return id_; }
- std::string& frame_name() { return frame_name_; }
-
- private:
- // Tag description
- int id_;
- double size_;
- std::string frame_name_;
-};
-
-class TagBundleDescription
-{
- public:
-  std::map<int, int> id2idx_; // (id2idx_[<tag ID>] = <index in tags_>) mapping
-
- TagBundleDescription(std::string name) :
- name_(name) {}
-
- void addMemberTag(int id, double size, cv::Matx44d T_oi) {
- TagBundleMember member;
- member.id = id;
- member.size = size;
- member.T_oi = T_oi;
- tags_.push_back(member);
- id2idx_[id] = tags_.size()-1;
- }
-
- std::string name () const { return name_; }
- // Get IDs of bundle member tags
-  std::vector<int> bundleIds () {
-    std::vector<int> ids;
- for (unsigned int i = 0; i < tags_.size(); i++) {
- ids.push_back(tags_[i].id);
- }
- return ids;
- }
- // Get sizes of bundle member tags
-  std::vector<double> bundleSizes () {
-    std::vector<double> sizes;
- for (unsigned int i = 0; i < tags_.size(); i++) {
- sizes.push_back(tags_[i].size);
- }
- return sizes;
- }
- int memberID (int tagID) { return tags_[id2idx_[tagID]].id; }
- double memberSize (int tagID) { return tags_[id2idx_[tagID]].size; }
- cv::Matx44d memberT_oi (int tagID) { return tags_[id2idx_[tagID]].T_oi; }
-
- private:
- // Bundle description
- std::string name_;
- std::vector tags_;
-};
-
-class TagDetector
-{
- private:
- // Detections sorting
- static int idComparison(const void* first, const void* second);
-
- // Remove detections of tags with the same ID
- void removeDuplicates();
-
- // AprilTag 2 code's attributes
- std::string family_;
- int threads_;
- double decimate_;
- double blur_;
- int refine_edges_;
- int debug_;
- int max_hamming_distance_ = 2; // Tunable, but really, 2 is a good choice. Values of >=3
- // consume prohibitively large amounts of memory, and otherwise
- // you want the largest value possible.
-
- // AprilTag 2 objects
- apriltag_family_t *tf_;
- apriltag_detector_t *td_;
- zarray_t *detections_;
-
- // Other members
-  std::map<int, StandaloneTagDescription> standalone_tag_descriptions_;
-  std::vector<TagBundleDescription> tag_bundle_descriptions_;
- bool remove_duplicates_;
- bool run_quietly_;
- bool publish_tf_;
- tf::TransformBroadcaster tf_pub_;
-
- public:
-
- TagDetector(ros::NodeHandle pnh);
- ~TagDetector();
-
- // Store standalone and bundle tag descriptions
-  std::map<int, StandaloneTagDescription> parseStandaloneTags(
- XmlRpc::XmlRpcValue& standalone_tag_descriptions);
-  std::vector<TagBundleDescription> parseTagBundles(
- XmlRpc::XmlRpcValue& tag_bundles);
- double xmlRpcGetDouble(
- XmlRpc::XmlRpcValue& xmlValue, std::string field) const;
- double xmlRpcGetDoubleWithDefault(
- XmlRpc::XmlRpcValue& xmlValue, std::string field,
- double defaultValue) const;
-
- bool findStandaloneTagDescription(
- int id, StandaloneTagDescription*& descriptionContainer,
- bool printWarning = true);
-
- geometry_msgs::PoseWithCovarianceStamped makeTagPose(
- const Eigen::Matrix4d& transform,
-      const Eigen::Quaternion<double> rot_quaternion,
- const std_msgs::Header& header);
-
- // Detect tags in an image
- AprilTagDetectionArray detectTags(
- const cv_bridge::CvImagePtr& image,
- const sensor_msgs::CameraInfoConstPtr& camera_info);
-
- // Get the pose of the tag in the camera frame
- // Returns homogeneous transformation matrix [R,t;[0 0 0 1]] which
- // takes a point expressed in the tag frame to the same point
- // expressed in the camera frame. As usual, R is the (passive)
- // rotation from the tag frame to the camera frame and t is the
- // vector from the camera frame origin to the tag frame origin,
- // expressed in the camera frame.
- Eigen::Matrix4d getRelativeTransform(
-      std::vector<cv::Point3d> objectPoints,
-      std::vector<cv::Point2d> imagePoints,
- double fx, double fy, double cx, double cy) const;
-
-  void addImagePoints(apriltag_detection_t *detection,
-                      std::vector<cv::Point2d>& imagePoints) const;
-  void addObjectPoints(double s, cv::Matx44d T_oi,
-                       std::vector<cv::Point3d>& objectPoints) const;
-
- // Draw the detected tags' outlines and payload values on the image
- void drawDetections(cv_bridge::CvImagePtr image);
-
- bool get_publish_tf() const { return publish_tf_; }
-};
-
-} // namespace apriltag_ros
-
-#endif // APRILTAG_ROS_COMMON_FUNCTIONS_H
diff --git a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/include/apriltag_ros/continuous_detector.h b/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/include/apriltag_ros/continuous_detector.h
deleted file mode 100644
index 0f6133d..0000000
--- a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/include/apriltag_ros/continuous_detector.h
+++ /dev/null
@@ -1,79 +0,0 @@
-/**
- * Copyright (c) 2017, California Institute of Technology.
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *
- * 1. Redistributions of source code must retain the above copyright notice,
- * this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright notice,
- * this list of conditions and the following disclaimer in the documentation
- * and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
- * POSSIBILITY OF SUCH DAMAGE.
- *
- * The views and conclusions contained in the software and documentation are
- * those of the authors and should not be interpreted as representing official
- * policies, either expressed or implied, of the California Institute of
- * Technology.
- *
- ** continuous_detector.h ******************************************************
- *
- * Wrapper class of TagDetector class which calls TagDetector::detectTags on
- * each newly arrived image published by a camera.
- *
- * $Revision: 1.0 $
- * $Date: 2017/12/17 13:25:52 $
- * $Author: dmalyuta $
- *
- * Originator: Danylo Malyuta, JPL
- ******************************************************************************/
-
-#ifndef APRILTAG_ROS_CONTINUOUS_DETECTOR_H
-#define APRILTAG_ROS_CONTINUOUS_DETECTOR_H
-
-#include "apriltag_ros/common_functions.h"
-
-#include <memory>
-
-#include <nodelet/nodelet.h>
-
-namespace apriltag_ros
-{
-
-class ContinuousDetector: public nodelet::Nodelet
-{
- public:
- ContinuousDetector() = default;
- ~ContinuousDetector() = default;
-
- void onInit();
-
- void imageCallback(const sensor_msgs::ImageConstPtr& image_rect,
- const sensor_msgs::CameraInfoConstPtr& camera_info);
-
- private:
-  std::shared_ptr<TagDetector> tag_detector_;
- bool draw_tag_detections_image_;
- cv_bridge::CvImagePtr cv_image_;
-
-  std::shared_ptr<image_transport::ImageTransport> it_;
- image_transport::CameraSubscriber camera_image_subscriber_;
- image_transport::Publisher tag_detections_image_publisher_;
- ros::Publisher tag_detections_publisher_;
-};
-
-} // namespace apriltag_ros
-
-#endif // APRILTAG_ROS_CONTINUOUS_DETECTOR_H
diff --git a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/include/apriltag_ros/single_image_detector.h b/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/include/apriltag_ros/single_image_detector.h
deleted file mode 100644
index e7f219c..0000000
--- a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/include/apriltag_ros/single_image_detector.h
+++ /dev/null
@@ -1,71 +0,0 @@
-/**
- * Copyright (c) 2017, California Institute of Technology.
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *
- * 1. Redistributions of source code must retain the above copyright notice,
- * this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright notice,
- * this list of conditions and the following disclaimer in the documentation
- * and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
- * POSSIBILITY OF SUCH DAMAGE.
- *
- * The views and conclusions contained in the software and documentation are
- * those of the authors and should not be interpreted as representing official
- * policies, either expressed or implied, of the California Institute of
- * Technology.
- *
- ** single_image_detector.h ****************************************************
- *
- * Wrapper class of TagDetector class which calls TagDetector::detectTags on a
- * an image stored at a specified load path and stores the output at a specified
- * save path.
- *
- * $Revision: 1.0 $
- * $Date: 2017/12/17 13:33:40 $
- * $Author: dmalyuta $
- *
- * Originator: Danylo Malyuta, JPL
- ******************************************************************************/
-
-#ifndef APRILTAG_ROS_SINGLE_IMAGE_DETECTOR_H
-#define APRILTAG_ROS_SINGLE_IMAGE_DETECTOR_H
-
-#include "apriltag_ros/common_functions.h"
-#include <apriltag_ros/AnalyzeSingleImage.h>
-
-namespace apriltag_ros
-{
-
-class SingleImageDetector
-{
- private:
- TagDetector tag_detector_;
- ros::ServiceServer single_image_analysis_service_;
-
- ros::Publisher tag_detections_publisher_;
-
- public:
- SingleImageDetector(ros::NodeHandle& nh, ros::NodeHandle& pnh);
-
- // The function which provides the single image analysis service
- bool analyzeImage(apriltag_ros::AnalyzeSingleImage::Request& request,
- apriltag_ros::AnalyzeSingleImage::Response& response);
-};
-
-} // namespace apriltag_ros
-
-#endif // APRILTAG_ROS_SINGLE_IMAGE_DETECTOR_H
diff --git a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/launch/continuous_detection.launch b/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/launch/continuous_detection.launch
deleted file mode 100644
index 8ec0776..0000000
--- a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/launch/continuous_detection.launch
+++ /dev/null
@@ -1,18 +0,0 @@
-<!-- continuous_detection.launch: XML contents stripped during extraction -->
diff --git a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/launch/single_image_client.launch b/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/launch/single_image_client.launch
deleted file mode 100644
index 1af41f8..0000000
--- a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/launch/single_image_client.launch
+++ /dev/null
@@ -1,25 +0,0 @@
-<!-- single_image_client.launch: XML contents stripped during extraction -->
diff --git a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/launch/single_image_server.launch b/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/launch/single_image_server.launch
deleted file mode 100644
index d26989c..0000000
--- a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/launch/single_image_server.launch
+++ /dev/null
@@ -1,12 +0,0 @@
-<!-- single_image_server.launch: XML contents stripped during extraction -->
diff --git a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/msg/AprilTagDetection.msg b/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/msg/AprilTagDetection.msg
deleted file mode 100644
index b0c6878..0000000
--- a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/msg/AprilTagDetection.msg
+++ /dev/null
@@ -1,14 +0,0 @@
-# Tag ID(s). If a standalone tag, this is a vector of size 1. If a tag bundle,
-# this is a vector containing the IDs of each tag in the bundle.
-int32[] id
-
-# Tag size(s). If a standalone tag, this is a vector of size 1. If a tag bundle,
-# this is a vector containing the sizes of each tag in the bundle, in the same
-# order as the IDs above.
-float64[] size
-
-# Pose in the camera frame, obtained from homography transform. If a standalone
-# tag, the homography is from the four tag corners. If a tag bundle, the
-# homography is from at least the four corners of one member tag and at most the
-# four corners of all member tags.
-geometry_msgs/PoseWithCovarianceStamped pose
\ No newline at end of file
diff --git a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/msg/AprilTagDetectionArray.msg b/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/msg/AprilTagDetectionArray.msg
deleted file mode 100644
index 7b45385..0000000
--- a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/msg/AprilTagDetectionArray.msg
+++ /dev/null
@@ -1,2 +0,0 @@
-std_msgs/Header header
-AprilTagDetection[] detections
diff --git a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/nodelet_plugins.xml b/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/nodelet_plugins.xml
deleted file mode 100644
index 1405111..0000000
--- a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/nodelet_plugins.xml
+++ /dev/null
@@ -1,9 +0,0 @@
-<!-- nodelet_plugins.xml: XML stripped during extraction; declares the ContinuousDetector nodelet ("Continuous AprilTag 3 detector") -->
diff --git a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/package.xml b/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/package.xml
deleted file mode 100644
index da33a77..0000000
--- a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/package.xml
+++ /dev/null
@@ -1,63 +0,0 @@
-
-
- apriltag_ros
- 3.2.1
-
- A ROS wrapper of the AprilTag 3 visual fiducial detection
- algorithm. Provides full access to the core AprilTag 3 algorithm's
- customizations and makes the tag detection image and detected tags' poses
- available over ROS topics (including tf). The core AprilTag 3 algorithm is
- extended to allow the detection of tag bundles and a bundle calibration
- script is provided (bundle detection is more accurate than single tag
- detection). Continuous (camera image stream) and single image detector nodes
- are available.
-
-
- Danylo Malyuta
- Wolfgang Merkt
-
- Danylo Malyuta
-
- BSD
-
- http://www.ros.org/wiki/apriltag_ros
- https://github.com/AprilRobotics/apriltag_ros/issues
- https://github.com/AprilRobotics/apriltag_ros
-
- catkin
-
- apriltag
- geometry_msgs
- image_transport
- image_geometry
- roscpp
- sensor_msgs
- message_generation
- std_msgs
- cv_bridge
- tf
- nodelet
- pluginlib
- eigen
- libopencv-dev
- cmake_modules
-
- tf
- apriltag
- geometry_msgs
- image_transport
- image_geometry
- roscpp
- sensor_msgs
- message_runtime
- std_msgs
- cv_bridge
- nodelet
- pluginlib
- eigen
- libopencv-dev
-
-
-
-
-
diff --git a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/scripts/analyze_image b/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/scripts/analyze_image
deleted file mode 100644
index 9f48751..0000000
--- a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/scripts/analyze_image
+++ /dev/null
@@ -1,8 +0,0 @@
-#!/bin/bash
-#
-# Detect AprilTag in a single image. Usage:
-#
-# ./analyze_image.sh <image_load_path> <image_save_path>
-#
-
-roslaunch apriltag_ros single_image_client.launch image_load_path:="$1" image_save_path:="$2"
diff --git a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/scripts/calibrate_bundle.m b/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/scripts/calibrate_bundle.m
deleted file mode 100644
index 252d0b0..0000000
--- a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/scripts/calibrate_bundle.m
+++ /dev/null
@@ -1,341 +0,0 @@
-% Copyright (c) 2017, California Institute of Technology.
-% All rights reserved.
-%
-% Redistribution and use in source and binary forms, with or without
-% modification, are permitted provided that the following conditions are met:
-%
-% 1. Redistributions of source code must retain the above copyright notice,
-% this list of conditions and the following disclaimer.
-% 2. Redistributions in binary form must reproduce the above copyright notice,
-% this list of conditions and the following disclaimer in the documentation
-% and/or other materials provided with the distribution.
-%
-% THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-% AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-% IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-% ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
-% LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-% CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-% SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-% INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-% CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-% ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-% POSSIBILITY OF SUCH DAMAGE.
-%
-% The views and conclusions contained in the software and documentation are
-% those of the authors and should not be interpreted as representing official
-% policies, either expressed or implied, of the California Institute of
-% Technology.
-%
-%%% calibrate_bundle.m %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-%
-% Script to determine AprilTag bundle relative poses to a "master" tag.
-%
-% Instructions:
-% Record a bagfile of the /tag_detections topic where you steadily
-% point the camera at the AprilTag bundle such that all the bundle's
-% individual tags are visible at least once at some point (the more the
-% better). Run the script, then copy the printed output into the tag.yaml
-% configuration file of apriltag_ros.
-%
-% $Revision: 1.0 $
-% $Date: 2017/12/17 13:37:34 $
-% $Author: dmalyuta $
-%
-% Originator: Danylo Malyuta, JPL
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-%% User inputs
-
-% Relative directory of calibration bagfile
-calibration_file = 'data/calibration.bag';
-
-% Bundle name
-bundle_name = 'my_bundle';
-
-% Master tag's ID
-master_id = 0;
-
-%% Make sure matlab_rosbag is installed
-
-if ~exist('matlab_rosbag-0.5.0-linux64','file')
- websave('matlab_rosbag', ...
- ['https://github.com/bcharrow/matlab_rosbag/releases/' ...
- 'download/v0.5/matlab_rosbag-0.5.0-linux64.zip']);
- unzip('matlab_rosbag');
- delete matlab_rosbag.zip
-end
-addpath('matlab_rosbag-0.5.0-linux64');
-
-%% Load the tag detections bagfile
-
-bag = ros.Bag.load(calibration_file);
-tag_msg = bag.readAll('/tag_detections');
-
-clear tag_data;
-N = numel(tag_msg);
-t0 = getBagTime(tag_msg{1});
-for i = 1:N
- tag_data.t(i) = getBagTime(tag_msg{i})-t0;
- for j = 1:numel(tag_msg{i}.detections)
- detection = tag_msg{i}.detections(j);
- if numel(detection.id)>1
- % Can only use standalone tag detections for calibration!
- % The math allows for bundles too (e.g. bundle composed of
- % bundles) but the code does not, and it's not that useful
- % anyway
- warning_str = 'Skipping tag bundle detection with IDs';
- for k = 1:numel(detection.id)
- warning_str = sprintf('%s %d',warning_str,detection.id(k));
- end
- warning(warning_str);
- continue;
- end
- tag_data.detection(i).id(j) = detection.id;
- tag_data.detection(i).size(j) = detection.size;
- % Tag position with respect to camera frame
- tag_data.detection(i).p(:,j) = detection.pose.pose.pose.position;
- % Tag orientation with respect to camera frame
- % [w;x;y;z] format
- tag_data.detection(i).q(:,j) = ...
- detection.pose.pose.pose.orientation([4,1,2,3]);
- end
-end
-
-%% Compute the measured poses of each tag relative to the master tag
-
-master_size = []; % Size of the master tag
-
-% IDs, sizes, relative positions and orientations of detected tags other
-% than master
-other_ids = [];
-other_sizes = [];
-rel_p = {};
-rel_q = {};
-
-createT = @(p,q) [quat2rotmat(q) p; zeros(1,3) 1];
-invertT = @(T) [T(1:3,1:3)' -T(1:3,1:3)'*T(1:3,4); zeros(1,3) 1];
-
-N = numel(tag_data.detection);
-for i = 1:N
- this = tag_data.detection(i);
-
- mi = find(this.id == master_id);
- if isempty(mi)
- % Master not detected in this detection, so this particular
- % detection is useless
- continue;
- end
-
- % Get the master tag's rigid body transform to the camera frame
- T_cm = createT(this.p(:,mi), this.q(:,mi));
-
- % Get the rigid body transform of every other tag to the camera frame
- for j = 1:numel(this.id)
- % Skip the master, but get its size first
- if isempty(master_size)
- master_size = this.size(j);
- end
- % We already have the rigid body transform from the master tag to
- % the camera frame (T_cm)
- if j == mi
- continue;
- end
-
- % Add ID to detected IDs, if not already there
- id = this.id(j);
- if ~ismember(id, other_ids)
- other_ids(end+1) = id;
- other_sizes(end+1) = this.size(j);
- rel_p{end+1} = [];
- rel_q{end+1} = [];
- end
-
- % Find the index in other_ids corresponding to this tag
- k = find(other_ids == id);
- assert(numel(k) == 1, ...
- 'Tag ID must appear exactly once in the other_ids array');
-
- % Get this tag's rigid body transform to the camera frame
- T_cj = createT(this.p(:,j), this.q(:,j));
-
- % Deduce this tag's rigid body transform to the master tag's frame
- T_mj = invertT(T_cm)*T_cj;
-
- % Save the relative position and orientation of this tag to the
- % master tag
- rel_p{k}(:,end+1) = T_mj(1:3,4);
- rel_q{k}(:,end+1) = rotmat2quat(T_mj);
- end
-end
-
-assert(~isempty(master_size), ...
- sprintf('Master tag with ID %d not found in detections', master_id));
-
-%% Compute (geometric) median position of each tag in master tag frame
-
-geometricMedianCost = @(x,y) sum(sqrt(sum((x-y).^2)));
-
-options = optimset('MaxIter',1000,'MaxFunEvals',1000, ...
- 'Algorithm','interior-point', ...
- 'TolFun', 1e-6, 'TolX', 1e-6);
-
-M = numel(rel_p);
-rel_p_median = nan(3, numel(other_ids));
-for i = 1:M
- % Compute the mean position as the initial value for the minimization
- % problem
- p_0 = mean(rel_p{i},2);
-
- % Compute the geometric median
- [rel_p_median(:,i),~,exitflag] = ...
- fminsearch(@(x) geometricMedianCost(rel_p{i}, x), p_0, options);
- assert(exitflag == 1, ...
- sprintf(['Geometric median minimization did ' ...
- 'not converge (exitflag %d)'], exitflag));
-end
-
-%% Compute the average orientation of each tag with respect to the master tag
-
-rel_q_mean = nan(4, numel(other_ids));
-for i = 1:M
- % Use the method in Markley et al. "Averaging Quaternions", JGCD 2007
-
- % Check the sufficient uniqueness condition
- % TODO this is a computational bottleneck - without this check, the script
- % returns much faster. Any way to speed up this double-for-loop?
- error_angle{i} = [];
- for j = 1:size(rel_q{i},2)
- q_1 = rel_q{i}(:,j);
- for k = 1:size(rel_q{i},2)
- if j==k
- continue;
- end
- q_2 = rel_q{i}(:,k);
- q_error = quatmult(quatinv(q_1),q_2);
- % Saturate to valid acos range, which prevents imaginary output
- % from acos due to q_error_w being infinitesimally (to numerical
- % precision) outside of valid [-1,1] range
- q_error_w = min(1,max(q_error(1),-1));
- error_angle{i}(end+1) = 2*acos(q_error_w);
- if 2*acos(q_error_w) >= pi/2
- warning(['Quaternion pair q_%u and q_%u for tag ID %u ' ...
- 'are more than 90 degrees apart!'], ...
- j,k,other_ids(i));
- end
- end
- end
-
- % Average quaternion method
- Q = rel_q{i};
- [V, D] = eig(Q*Q.');
- [~,imax] = max(diag(D)); % Get the largest eigenvalue
- rel_q_mean(:,i) = V(:,imax); % Corresponding eigenvector
- if rel_q_mean(1,i) < 0
- rel_q_mean(:,i) = -rel_q_mean(:,i); % Ensure w positive
- end
-end
-
-%% Print output to paste in tags.yaml
-
-% Head + master tag
-fprintf([ ...
-'tag_bundles:\n' ...
-' [\n' ...
-' {\n' ...
-' name: ''%s'',\n' ...
-' layout:\n' ...
-' [\n'], bundle_name);
-
-% All other tags detected at least once together with master tag
-for i = 0:numel(other_ids)
- newline = ',';
- if i == numel(other_ids)
- newline = '';
- end
- if i == 0
- fprintf(' {id: %d, size: %.2f, x: %.4f, y: %.4f, z: %.4f, qw: %.4f, qx: %.4f, qy: %.4f, qz: %.4f}%s\n', ...
- master_id, master_size, 0, 0, 0, 1, 0, 0, 0, newline);
- else
- fprintf(' {id: %d, size: %.2f, x: %.4f, y: %.4f, z: %.4f, qw: %.4f, qx: %.4f, qy: %.4f, qz: %.4f}%s\n', ...
- other_ids(i), other_sizes(i), rel_p_median(1,i), ...
- rel_p_median(2,i), rel_p_median(3,i), rel_q_mean(1,i), ...
- rel_q_mean(2,i), rel_q_mean(3,i), rel_q_mean(4,i), newline);
- end
-end
-
-% Tail
-fprintf([ ...
-' ]\n'...
-' }\n'...
-' ]\n']);
-
-%% Local functions
-
-function t = getBagTime(bagfile)
- t = double(bagfile.header.stamp.sec)+ ...
- double(bagfile.header.stamp.nsec)/1e9;
-end
-
-function R = quat2rotmat(q)
- % Creates an ACTIVE rotation matrix from a quaternion
- w = q(1);
- x = q(2);
- y = q(3);
- z = q(4);
-
- R = [1-2*(y^2+z^2) 2*(x*y-w*z) 2*(x*z+w*y)
- 2*(x*y+w*z) 1-2*(x^2+z^2) 2*(y*z-w*x)
- 2*(x*z-w*y) 2*(y*z+w*x) 1-2*(x^2+y^2)];
-end
-
-function q = rotmat2quat(R)
- % Adapted for MATLAB from
- % http://www.euclideanspace.com/maths/geometry/rotations/conversions/matrixToQuaternion/
- tr = R(1,1) + R(2,2) + R(3,3);
-
- if tr > 0
- S = sqrt(tr+1.0) * 2; % S=4*qw
- qw = 0.25 * S;
- qx = (R(3,2) - R(2,3)) / S;
- qy = (R(1,3) - R(3,1)) / S;
- qz = (R(2,1) - R(1,2)) / S;
- elseif (R(1,1) > R(2,2)) && (R(1,1) > R(3,3))
- S = sqrt(1.0 + R(1,1) - R(2,2) - R(3,3)) * 2; % S=4*qx
- qw = (R(3,2) - R(2,3)) / S;
- qx = 0.25 * S;
- qy = (R(1,2) + R(2,1)) / S;
- qz = (R(1,3) + R(3,1)) / S;
- elseif (R(2,2) > R(3,3))
- S = sqrt(1.0 + R(2,2) - R(1,1) - R(3,3)) * 2; % S=4*qy
- qw = (R(1,3) - R(3,1)) / S;
- qx = (R(1,2) + R(2,1)) / S;
- qy = 0.25 * S;
- qz = (R(2,3) + R(3,2)) / S;
- else
- S = sqrt(1.0 + R(3,3) - R(1,1) - R(2,2)) * 2; % S=4*qz
- qw = (R(2,1) - R(1,2)) / S;
- qx = (R(1,3) + R(3,1)) / S;
- qy = (R(2,3) + R(3,2)) / S;
- qz = 0.25 * S;
- end
-
- q = [qw qx qy qz]';
-end
-
-function c = quatmult(a,b)
- % More humanly understandable version:
- % Omega_a = [a(1)   -a(2:4).'
- %            a(2:4)  a(1)*eye(3)-[0 -a(4) a(3); a(4) 0 -a(2); -a(3) a(2) 0]];
- % c = Omega_a*b;
- % More optimized version:
- c_w = a(1)*b(1) - a(2)*b(2) - a(3)*b(3) - a(4)*b(4);
- c_x = a(1)*b(2) + a(2)*b(1) - a(3)*b(4) + a(4)*b(3);
- c_y = a(1)*b(3) + a(3)*b(1) + a(2)*b(4) - a(4)*b(2);
- c_z = a(1)*b(4) - a(2)*b(3) + a(3)*b(2) + a(4)*b(1);
- c = [c_w; c_x; c_y; c_z];
-end
-
-function qinv = quatinv(q)
- % Conjugate of q, which equals the inverse for unit quaternions
- qinv = [q(1); -q(2:4)];
-end
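For readers porting the script out of MATLAB, the two averaging steps above condense to a few lines of numpy/scipy. This is a sketch, not part of the repo: the function names are mine, quaternions are `[w,x,y,z]` as in the script, and Nelder-Mead stands in for `fminsearch`.

```python
import numpy as np
from scipy.optimize import minimize

def average_quaternion(Q):
    """Markley et al., "Averaging Quaternions" (JGCD 2007): the average is
    the eigenvector of Q @ Q.T for its largest eigenvalue (Q is 4xN)."""
    _, eigvecs = np.linalg.eigh(Q @ Q.T)  # eigh sorts eigenvalues ascending
    q = eigvecs[:, -1]
    return q if q[0] >= 0 else -q         # keep w non-negative, like the script

def geometric_median(P):
    """Minimize the summed Euclidean distance to the 3xN points P, starting
    from their mean (same cost as geometricMedianCost)."""
    cost = lambda x: np.sum(np.linalg.norm(P - x[:, None], axis=0))
    return minimize(cost, P.mean(axis=1), method="Nelder-Mead").x
```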
diff --git a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/src/apriltag_ros_continuous_node.cpp b/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/src/apriltag_ros_continuous_node.cpp
deleted file mode 100644
index add67d2..0000000
--- a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/src/apriltag_ros_continuous_node.cpp
+++ /dev/null
@@ -1,50 +0,0 @@
-/**
- * Copyright (c) 2017, California Institute of Technology.
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *
- * 1. Redistributions of source code must retain the above copyright notice,
- * this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright notice,
- * this list of conditions and the following disclaimer in the documentation
- * and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
- * POSSIBILITY OF SUCH DAMAGE.
- *
- * The views and conclusions contained in the software and documentation are
- * those of the authors and should not be interpreted as representing official
- * policies, either expressed or implied, of the California Institute of
- * Technology.
- */
-
-#include <ros/ros.h>
-
-#include <nodelet/loader.h>
-
-int main(int argc, char **argv)
-{
- ros::init(argc, argv, "apriltag_ros");
-
- nodelet::Loader nodelet;
- nodelet::M_string remap(ros::names::getRemappings());
- nodelet::V_string nargv;
-
- nodelet.load(ros::this_node::getName(),
- "apriltag_ros/ContinuousDetector",
- remap, nargv);
-
- ros::spin();
- return 0;
-}
diff --git a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/src/apriltag_ros_single_image_client_node.cpp b/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/src/apriltag_ros_single_image_client_node.cpp
deleted file mode 100644
index c24d321..0000000
--- a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/src/apriltag_ros_single_image_client_node.cpp
+++ /dev/null
@@ -1,122 +0,0 @@
-/**
- * Copyright (c) 2017, California Institute of Technology.
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *
- * 1. Redistributions of source code must retain the above copyright notice,
- * this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright notice,
- * this list of conditions and the following disclaimer in the documentation
- * and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
- * POSSIBILITY OF SUCH DAMAGE.
- *
- * The views and conclusions contained in the software and documentation are
- * those of the authors and should not be interpreted as representing official
- * policies, either expressed or implied, of the California Institute of
- * Technology.
- */
-
-#include "apriltag_ros/common_functions.h"
-#include <apriltag_ros/AnalyzeSingleImage.h>
-
-bool getRosParameter (ros::NodeHandle& pnh, std::string name, double& param)
-{
- // Write parameter "name" from ROS Parameter Server into param
- // Return true if successful, false otherwise
- if (pnh.hasParam(name.c_str()))
- {
- pnh.getParam(name.c_str(), param);
- ROS_INFO_STREAM("Set camera " << name.c_str() << " = " << param);
- return true;
- }
- else
- {
- ROS_ERROR_STREAM("Could not find " << name.c_str() << " parameter!");
- return false;
- }
-}
-
-int main(int argc, char **argv)
-{
- ros::init(argc, argv, "apriltag_ros_single_image_client");
-
- ros::NodeHandle nh;
- ros::NodeHandle pnh("~");
-
- ros::ServiceClient client =
- nh.serviceClient<apriltag_ros::AnalyzeSingleImage>(
- "single_image_tag_detection");
-
- // Get the request parameters
- apriltag_ros::AnalyzeSingleImage service;
- service.request.full_path_where_to_get_image =
- apriltag_ros::getAprilTagOption<std::string>(
- pnh, "image_load_path", "");
- if (service.request.full_path_where_to_get_image.empty())
- {
- return 1;
- }
- service.request.full_path_where_to_save_image =
- apriltag_ros::getAprilTagOption<std::string>(
- pnh, "image_save_path", "");
- if (service.request.full_path_where_to_save_image.empty())
- {
- return 1;
- }
-
- // Replicate sensor_msgs/CameraInfo message (must be up-to-date with the
- // analyzed image!)
- service.request.camera_info.distortion_model = "plumb_bob";
- double fx, fy, cx, cy;
- if (!getRosParameter(pnh, "fx", fx))
- return 1;
- if (!getRosParameter(pnh, "fy", fy))
- return 1;
- if (!getRosParameter(pnh, "cx", cx))
- return 1;
- if (!getRosParameter(pnh, "cy", cy))
- return 1;
- // Intrinsic camera matrix for the raw (distorted) images
- service.request.camera_info.K[0] = fx;
- service.request.camera_info.K[2] = cx;
- service.request.camera_info.K[4] = fy;
- service.request.camera_info.K[5] = cy;
- service.request.camera_info.K[8] = 1.0;
- service.request.camera_info.P[0] = fx;
- service.request.camera_info.P[2] = cx;
- service.request.camera_info.P[5] = fy;
- service.request.camera_info.P[6] = cy;
- service.request.camera_info.P[10] = 1.0;
-
- // Call the service (detect tags in the image specified by the
- // image_load_path)
- if (client.call(service))
- {
- // use parameter run_quietly=false in order to have the service
- // print out the tag position and orientation
- if (service.response.tag_detections.detections.size() == 0)
- {
- ROS_WARN_STREAM("No detected tags!");
- }
- }
- else
- {
- ROS_ERROR("Failed to call service single_image_tag_detection");
- return 1;
- }
-
- return 0; // happy ending
-}
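The K/P block above is easy to get wrong because both matrices are flattened row-major in `sensor_msgs/CameraInfo`. A small numpy sketch with placeholder intrinsics shows which flat indices the client fills:

```python
import numpy as np

fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0   # placeholder intrinsics
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])               # flat indices 0, 2, 4, 5, 8
P = np.hstack([K, np.zeros((3, 1))])          # flat indices 0, 2, 5, 6, 10
assert P.flatten()[10] == 1.0                 # same entries the client sets
```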
diff --git a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/src/apriltag_ros_single_image_server_node.cpp b/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/src/apriltag_ros_single_image_server_node.cpp
deleted file mode 100644
index 09ab903..0000000
--- a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/src/apriltag_ros_single_image_server_node.cpp
+++ /dev/null
@@ -1,45 +0,0 @@
-/**
- * Copyright (c) 2017, California Institute of Technology.
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *
- * 1. Redistributions of source code must retain the above copyright notice,
- * this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright notice,
- * this list of conditions and the following disclaimer in the documentation
- * and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
- * POSSIBILITY OF SUCH DAMAGE.
- *
- * The views and conclusions contained in the software and documentation are
- * those of the authors and should not be interpreted as representing official
- * policies, either expressed or implied, of the California Institute of
- * Technology.
- */
-
-#include "apriltag_ros/common_functions.h"
-#include "apriltag_ros/single_image_detector.h"
-
-int main(int argc, char **argv)
-{
- ros::init(argc, argv, "apriltag_ros_single_image_server");
-
- ros::NodeHandle nh;
- ros::NodeHandle pnh("~");
-
- apriltag_ros::SingleImageDetector continuous_tag_detector(nh, pnh);
-
- ros::spin();
-}
diff --git a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/src/common_functions.cpp b/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/src/common_functions.cpp
deleted file mode 100644
index 958c0e2..0000000
--- a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/src/common_functions.cpp
+++ /dev/null
@@ -1,794 +0,0 @@
-/**
- * Copyright (c) 2017, California Institute of Technology.
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *
- * 1. Redistributions of source code must retain the above copyright notice,
- * this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright notice,
- * this list of conditions and the following disclaimer in the documentation
- * and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
- * POSSIBILITY OF SUCH DAMAGE.
- *
- * The views and conclusions contained in the software and documentation are
- * those of the authors and should not be interpreted as representing official
- * policies, either expressed or implied, of the California Institute of
- * Technology.
- */
-
-#include "apriltag_ros/common_functions.h"
-#include "image_geometry/pinhole_camera_model.h"
-
-#include "common/homography.h"
-#include "tagStandard52h13.h"
-#include "tagStandard41h12.h"
-#include "tag36h11.h"
-#include "tag25h9.h"
-#include "tag16h5.h"
-#include "tagCustom48h12.h"
-#include "tagCircle21h7.h"
-#include "tagCircle49h12.h"
-
-namespace apriltag_ros
-{
-
-TagDetector::TagDetector(ros::NodeHandle pnh) :
- family_(getAprilTagOption<std::string>(pnh, "tag_family", "tag36h11")),
- threads_(getAprilTagOption<int>(pnh, "tag_threads", 4)),
- decimate_(getAprilTagOption<double>(pnh, "tag_decimate", 1.0)),
- blur_(getAprilTagOption<double>(pnh, "tag_blur", 0.0)),
- refine_edges_(getAprilTagOption<int>(pnh, "tag_refine_edges", 1)),
- debug_(getAprilTagOption<int>(pnh, "tag_debug", 0)),
- max_hamming_distance_(getAprilTagOption<int>(pnh, "max_hamming_dist", 2)),
- publish_tf_(getAprilTagOption<bool>(pnh, "publish_tf", false))
-{
- // Parse standalone tag descriptions specified by user (stored on ROS
- // parameter server)
- XmlRpc::XmlRpcValue standalone_tag_descriptions;
- if(!pnh.getParam("standalone_tags", standalone_tag_descriptions))
- {
- ROS_WARN("No april tags specified");
- }
- else
- {
- try
- {
- standalone_tag_descriptions_ =
- parseStandaloneTags(standalone_tag_descriptions);
- }
- catch(XmlRpc::XmlRpcException e)
- {
- // in case any of the asserts in parseStandaloneTags() fail
- ROS_ERROR_STREAM("Error loading standalone tag descriptions: " <<
- e.getMessage().c_str());
- }
- }
-
- // parse tag bundle descriptions specified by user (stored on ROS parameter
- // server)
- XmlRpc::XmlRpcValue tag_bundle_descriptions;
- if(!pnh.getParam("tag_bundles", tag_bundle_descriptions))
- {
- ROS_WARN("No tag bundles specified");
- }
- else
- {
- try
- {
- tag_bundle_descriptions_ = parseTagBundles(tag_bundle_descriptions);
- }
- catch(XmlRpc::XmlRpcException e)
- {
- // In case any of the asserts in parseStandaloneTags() fail
- ROS_ERROR_STREAM("Error loading tag bundle descriptions: " <<
- e.getMessage().c_str());
- }
- }
-
- // Optionally remove duplicate detections in scene. Defaults to removing
- if(!pnh.getParam("remove_duplicates", remove_duplicates_))
- {
- ROS_WARN("remove_duplicates parameter not provided. Defaulting to true");
- remove_duplicates_ = true;
- }
-
- // Define the tag family whose tags should be searched for in the camera
- // images
- if (family_ == "tagStandard52h13")
- {
- tf_ = tagStandard52h13_create();
- }
- else if (family_ == "tagStandard41h12")
- {
- tf_ = tagStandard41h12_create();
- }
- else if (family_ == "tag36h11")
- {
- tf_ = tag36h11_create();
- }
- else if (family_ == "tag25h9")
- {
- tf_ = tag25h9_create();
- }
- else if (family_ == "tag16h5")
- {
- tf_ = tag16h5_create();
- }
- else if (family_ == "tagCustom48h12")
- {
- tf_ = tagCustom48h12_create();
- }
- else if (family_ == "tagCircle21h7")
- {
- tf_ = tagCircle21h7_create();
- }
- else if (family_ == "tagCircle49h12")
- {
- tf_ = tagCircle49h12_create();
- }
- else
- {
- ROS_WARN("Invalid tag family specified! Aborting");
- exit(1);
- }
-
- // Create the AprilTag 2 detector
- td_ = apriltag_detector_create();
- apriltag_detector_add_family_bits(td_, tf_, max_hamming_distance_);
- td_->quad_decimate = (float)decimate_;
- td_->quad_sigma = (float)blur_;
- td_->nthreads = threads_;
- td_->debug = debug_;
- td_->refine_edges = refine_edges_;
-
- detections_ = NULL;
-}
-
-// destructor
-TagDetector::~TagDetector() {
- // free memory associated with tag detector
- apriltag_detector_destroy(td_);
-
- // Free memory associated with the array of tag detections
- if(detections_)
- {
- apriltag_detections_destroy(detections_);
- }
-
- // free memory associated with tag family
- if (family_ == "tagStandard52h13")
- {
- tagStandard52h13_destroy(tf_);
- }
- else if (family_ == "tagStandard41h12")
- {
- tagStandard41h12_destroy(tf_);
- }
- else if (family_ == "tag36h11")
- {
- tag36h11_destroy(tf_);
- }
- else if (family_ == "tag25h9")
- {
- tag25h9_destroy(tf_);
- }
- else if (family_ == "tag16h5")
- {
- tag16h5_destroy(tf_);
- }
- else if (family_ == "tagCustom48h12")
- {
- tagCustom48h12_destroy(tf_);
- }
- else if (family_ == "tagCircle21h7")
- {
- tagCircle21h7_destroy(tf_);
- }
- else if (family_ == "tagCircle49h12")
- {
- tagCircle49h12_destroy(tf_);
- }
-}
-
-AprilTagDetectionArray TagDetector::detectTags (
- const cv_bridge::CvImagePtr& image,
- const sensor_msgs::CameraInfoConstPtr& camera_info) {
- // Convert image to AprilTag code's format
- cv::Mat gray_image;
- if (image->image.channels() == 1)
- {
- gray_image = image->image;
- }
- else
- {
- cv::cvtColor(image->image, gray_image, CV_BGR2GRAY);
- }
- image_u8_t apriltag_image = { .width = gray_image.cols,
- .height = gray_image.rows,
- .stride = gray_image.cols,
- .buf = gray_image.data
- };
-
- image_geometry::PinholeCameraModel camera_model;
- camera_model.fromCameraInfo(camera_info);
-
- // Get camera intrinsic properties for rectified image.
- double fx = camera_model.fx(); // focal length in camera x-direction [px]
- double fy = camera_model.fy(); // focal length in camera y-direction [px]
- double cx = camera_model.cx(); // optical center x-coordinate [px]
- double cy = camera_model.cy(); // optical center y-coordinate [px]
-
- // Run AprilTag 2 algorithm on the image
- if (detections_)
- {
- apriltag_detections_destroy(detections_);
- detections_ = NULL;
- }
- detections_ = apriltag_detector_detect(td_, &apriltag_image);
-
- // If remove_duplicates_ is set to true, then duplicate tags are not allowed.
- // Thus any duplicate tag IDs visible in the scene must include at least 1
- // erroneous detection. Remove any tags with duplicate IDs to ensure removal
- // of these erroneous detections
- if (remove_duplicates_)
- {
- removeDuplicates();
- }
-
- // Compute the estimated translation and rotation individually for each
- // detected tag
- AprilTagDetectionArray tag_detection_array;
- std::vector<std::string> detection_names;
- tag_detection_array.header = image->header;
- std::map<std::string, std::vector<cv::Point3d> > bundleObjectPoints;
- std::map<std::string, std::vector<cv::Point2d> > bundleImagePoints;
- for (int i=0; i < zarray_size(detections_); i++)
- {
- // Get the i-th detected tag
- apriltag_detection_t *detection;
- zarray_get(detections_, i, &detection);
-
- // Bootstrap this for loop to find this tag's description amongst
- // the tag bundles. If found, add its points to the bundle's set of
- // object-image corresponding points (tag corners) for cv::solvePnP.
- // Don't yet run cv::solvePnP on the bundles, though, since we're still in
- // the process of collecting all the object-image corresponding points
- int tagID = detection->id;
- bool is_part_of_bundle = false;
-    for (unsigned int j=0; j<tag_bundle_descriptions_.size(); j++)
-    {
-      // Iterate over the registered bundles
-      TagBundleDescription bundle = tag_bundle_descriptions_[j];
-      if (bundle.id2idx_.find(tagID) != bundle.id2idx_.end())
-      {
-        // This detected tag belongs to the j-th tag bundle (its ID was found
-        // in the bundle description)
-        is_part_of_bundle = true;
-        std::string bundleName = bundle.name();
-
-        //===== Corner points in the world frame coordinates
-        double s = bundle.memberSize(tagID)/2;
-        addObjectPoints(s, bundle.memberT_oi(tagID),
-                        bundleObjectPoints[bundleName]);
-
-        //===== Corner points in the image frame coordinates
-        addImagePoints(detection, bundleImagePoints[bundleName]);
-      }
-    }
-
-    // Find this tag's description amongst the standalone tags. Skip it (with
-    // a warning if it is also not part of a bundle) when no description
-    // exists for its ID
-    StandaloneTagDescription* standaloneDescription;
-    if (!findStandaloneTagDescription(tagID, standaloneDescription,
-                                      !is_part_of_bundle))
-    {
-      continue;
-    }
-
-    // The remaining part is only performed if the tag is also defined as a
-    // standalone tag in the configuration file
-    double tag_size = standaloneDescription->size();
-
- // Get estimated tag pose in the camera frame.
- //
- // Note on frames:
- // The raw AprilTag 2 uses the following frames:
- // - camera frame: looking from behind the camera (like a
- // photographer), x is right, y is up and z is towards you
- // (i.e. the back of camera)
- // - tag frame: looking straight at the tag (oriented correctly),
- // x is right, y is down and z is away from you (into the tag).
- // But we want:
- // - camera frame: looking from behind the camera (like a
- // photographer), x is right, y is down and z is straight
- // ahead
- // - tag frame: looking straight at the tag (oriented correctly),
- // x is right, y is up and z is towards you (out of the tag).
- // Using these frames together with cv::solvePnP directly avoids
- // AprilTag 2's frames altogether.
- // TODO solvePnP[Ransac] better?
- std::vector<cv::Point3d > standaloneTagObjectPoints;
- std::vector<cv::Point2d > standaloneTagImagePoints;
- addObjectPoints(tag_size/2, cv::Matx44d::eye(), standaloneTagObjectPoints);
- addImagePoints(detection, standaloneTagImagePoints);
- Eigen::Matrix4d transform = getRelativeTransform(standaloneTagObjectPoints,
- standaloneTagImagePoints,
- fx, fy, cx, cy);
- Eigen::Matrix3d rot = transform.block(0, 0, 3, 3);
- Eigen::Quaternion<double> rot_quaternion(rot);
-
- geometry_msgs::PoseWithCovarianceStamped tag_pose =
- makeTagPose(transform, rot_quaternion, image->header);
-
- // Add the detection to the back of the tag detection array
- AprilTagDetection tag_detection;
- tag_detection.pose = tag_pose;
- tag_detection.id.push_back(detection->id);
- tag_detection.size.push_back(tag_size);
- tag_detection_array.detections.push_back(tag_detection);
- detection_names.push_back(standaloneDescription->frame_name());
- }
-
- //=================================================================
- // Estimate bundle origin pose for each bundle in which at least one
- // member tag was detected
-
-  for (unsigned int j=0; j<tag_bundle_descriptions_.size(); j++)
-  {
-    // Get bundle name
-    std::string bundleName = tag_bundle_descriptions_[j].name();
-
-    std::map<std::string,
-             std::vector<cv::Point3d> >::iterator it =
- bundleObjectPoints.find(bundleName);
- if (it != bundleObjectPoints.end())
- {
- // Some member tags of this bundle were detected, get the bundle's
- // position!
- TagBundleDescription& bundle = tag_bundle_descriptions_[j];
-
- Eigen::Matrix4d transform =
- getRelativeTransform(bundleObjectPoints[bundleName],
- bundleImagePoints[bundleName], fx, fy, cx, cy);
- Eigen::Matrix3d rot = transform.block(0, 0, 3, 3);
- Eigen::Quaternion<double> rot_quaternion(rot);
-
- geometry_msgs::PoseWithCovarianceStamped bundle_pose =
- makeTagPose(transform, rot_quaternion, image->header);
-
- // Add the detection to the back of the tag detection array
- AprilTagDetection tag_detection;
- tag_detection.pose = bundle_pose;
- tag_detection.id = bundle.bundleIds();
- tag_detection.size = bundle.bundleSizes();
- tag_detection_array.detections.push_back(tag_detection);
- detection_names.push_back(bundle.name());
- }
- }
-
- // If set, publish the detected transforms on the /tf topic
- if (publish_tf_) {
-    for (unsigned int i=0; i<tag_detection_array.detections.size(); i++) {
-      geometry_msgs::PoseStamped pose;
-      pose.pose = tag_detection_array.detections[i].pose.pose.pose;
-      pose.header = tag_detection_array.detections[i].pose.header;
-      tf::Stamped<tf::Transform> tag_transform;
- tf::poseStampedMsgToTF(pose, tag_transform);
- tf_pub_.sendTransform(tf::StampedTransform(tag_transform,
- tag_transform.stamp_,
- image->header.frame_id,
- detection_names[i]));
- }
- }
-
- return tag_detection_array;
-}
-
-int TagDetector::idComparison (const void* first, const void* second)
-{
- // zarray stores apriltag_detection_t pointers, so the comparator receives
- // pointers to those pointers
- int id1 = (*(apriltag_detection_t**) first)->id;
- int id2 = (*(apriltag_detection_t**) second)->id;
- return (id1 < id2) ? -1 : ((id1 == id2) ? 0 : 1);
-}
-
-void TagDetector::removeDuplicates ()
-{
- zarray_sort(detections_, &idComparison);
- int count = 0;
- bool duplicate_detected = false;
- while (true)
- {
- if (count > zarray_size(detections_)-1)
- {
- // The entire detection set was parsed
- return;
- }
- apriltag_detection_t *detection;
- zarray_get(detections_, count, &detection);
- int id_current = detection->id;
- // Default id_next value of -1 ensures that if the last detection
- // is a duplicated tag ID, it will get removed
- int id_next = -1;
- if (count < zarray_size(detections_)-1)
- {
- zarray_get(detections_, count+1, &detection);
- id_next = detection->id;
- }
- if (id_current == id_next || (id_current != id_next && duplicate_detected))
- {
- duplicate_detected = true;
- // Remove the current tag detection from detections array
- int shuffle = 0;
- zarray_remove_index(detections_, count, shuffle);
- if (id_current != id_next)
- {
- ROS_WARN_STREAM("Pruning tag ID " << id_current << " because it "
- "appears more than once in the image.");
- duplicate_detected = false; // Reset
- }
- continue;
- }
- else
- {
- count++;
- }
- }
-}
-
-void TagDetector::addObjectPoints (
- double s, cv::Matx44d T_oi, std::vector<cv::Point3d >& objectPoints) const
-{
- // Add to object point vector the tag corner coordinates in the bundle frame
- // Going counterclockwise starting from the bottom left corner
- objectPoints.push_back(T_oi.get_minor<3, 4>(0, 0)*cv::Vec4d(-s,-s, 0, 1));
- objectPoints.push_back(T_oi.get_minor<3, 4>(0, 0)*cv::Vec4d( s,-s, 0, 1));
- objectPoints.push_back(T_oi.get_minor<3, 4>(0, 0)*cv::Vec4d( s, s, 0, 1));
- objectPoints.push_back(T_oi.get_minor<3, 4>(0, 0)*cv::Vec4d(-s, s, 0, 1));
-}
-
-void TagDetector::addImagePoints (
- apriltag_detection_t *detection,
- std::vector<cv::Point2d >& imagePoints) const
-{
- // Add to image point vector the tag corners in the image frame
- // Going counterclockwise starting from the bottom left corner
- double tag_x[4] = {-1,1,1,-1};
- double tag_y[4] = {1,1,-1,-1}; // Negated because AprilTag tag local
- // frame has y-axis pointing DOWN
- // while we use the tag local frame
- // with y-axis pointing UP
- for (int i=0; i<4; i++)
- {
- // Homography projection taking tag local frame coordinates to image pixels
- double im_x, im_y;
- homography_project(detection->H, tag_x[i], tag_y[i], &im_x, &im_y);
- imagePoints.push_back(cv::Point2d(im_x, im_y));
- }
-}
-
-Eigen::Matrix4d TagDetector::getRelativeTransform(
- std::vector<cv::Point3d > objectPoints,
- std::vector<cv::Point2d > imagePoints,
- double fx, double fy, double cx, double cy) const
-{
- // perform Perspective-n-Point camera pose estimation using the
- // above 3D-2D point correspondences
- cv::Mat rvec, tvec;
- cv::Matx33d cameraMatrix(fx, 0, cx,
- 0, fy, cy,
- 0, 0, 1);
- cv::Vec4f distCoeffs(0,0,0,0); // distortion coefficients
- // TODO Perhaps something like SOLVEPNP_EPNP would be faster? Would
- // need to first check WHAT is a bottleneck in this code, and only
- // do this if PnP solution is the bottleneck.
- cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);
- cv::Matx33d R;
- cv::Rodrigues(rvec, R);
- Eigen::Matrix3d wRo;
- wRo << R(0,0), R(0,1), R(0,2), R(1,0), R(1,1), R(1,2), R(2,0), R(2,1), R(2,2);
-
- Eigen::Matrix4d T; // homogeneous transformation matrix
- T.topLeftCorner(3, 3) = wRo;
- T.col(3).head(3) <<
- tvec.at<double>(0), tvec.at<double>(1), tvec.at<double>(2);
- T.row(3) << 0,0,0,1;
- return T;
-}
-
-geometry_msgs::PoseWithCovarianceStamped TagDetector::makeTagPose(
- const Eigen::Matrix4d& transform,
- const Eigen::Quaternion<double> rot_quaternion,
- const std_msgs::Header& header)
-{
- geometry_msgs::PoseWithCovarianceStamped pose;
- pose.header = header;
- //===== Position and orientation
- pose.pose.pose.position.x = transform(0, 3);
- pose.pose.pose.position.y = transform(1, 3);
- pose.pose.pose.position.z = transform(2, 3);
- pose.pose.pose.orientation.x = rot_quaternion.x();
- pose.pose.pose.orientation.y = rot_quaternion.y();
- pose.pose.pose.orientation.z = rot_quaternion.z();
- pose.pose.pose.orientation.w = rot_quaternion.w();
- return pose;
-}
-
-void TagDetector::drawDetections (cv_bridge::CvImagePtr image)
-{
- for (int i = 0; i < zarray_size(detections_); i++)
- {
- apriltag_detection_t *det;
- zarray_get(detections_, i, &det);
-
- // Check if this ID is present in config/tags.yaml
- // Check if it is part of a tag bundle
- int tagID = det->id;
- bool is_part_of_bundle = false;
-    for (unsigned int j=0; j<tag_bundle_descriptions_.size(); j++)
-    {
-      TagBundleDescription bundle = tag_bundle_descriptions_[j];
-      if (bundle.id2idx_.find(tagID) != bundle.id2idx_.end())
-      {
-        is_part_of_bundle = true;
-        break;
-      }
-    }
-    // If not part of a bundle, check if defined as a standalone tag
-    StandaloneTagDescription* standaloneDescription;
-    if (!is_part_of_bundle &&
-        !findStandaloneTagDescription(tagID, standaloneDescription, false))
-    {
-      // Neither a standalone tag nor part of a bundle: a tag in the
-      // environment with no description, so skip it
-      continue;
-    }
-
-    // Draw tag outline with edge colors green, blue, blue, red
-    // (going counter-clockwise, starting from lower-left corner in
-    // tag coords). cv::Scalar(Blue, Green, Red) format for the edge
-    // colors!
-    line(image->image, cv::Point((int)det->p[0][0], (int)det->p[0][1]),
- cv::Point((int)det->p[1][0], (int)det->p[1][1]),
- cv::Scalar(0, 0xff, 0)); // green
- line(image->image, cv::Point((int)det->p[0][0], (int)det->p[0][1]),
- cv::Point((int)det->p[3][0], (int)det->p[3][1]),
- cv::Scalar(0, 0, 0xff)); // red
- line(image->image, cv::Point((int)det->p[1][0], (int)det->p[1][1]),
- cv::Point((int)det->p[2][0], (int)det->p[2][1]),
- cv::Scalar(0xff, 0, 0)); // blue
- line(image->image, cv::Point((int)det->p[2][0], (int)det->p[2][1]),
- cv::Point((int)det->p[3][0], (int)det->p[3][1]),
- cv::Scalar(0xff, 0, 0)); // blue
-
- // Print tag ID in the middle of the tag
- std::stringstream ss;
- ss << det->id;
- cv::String text = ss.str();
- int fontface = cv::FONT_HERSHEY_SCRIPT_SIMPLEX;
- double fontscale = 0.5;
- int baseline;
- cv::Size textsize = cv::getTextSize(text, fontface,
- fontscale, 2, &baseline);
- cv::putText(image->image, text,
- cv::Point((int)(det->c[0]-textsize.width/2),
- (int)(det->c[1]+textsize.height/2)),
- fontface, fontscale, cv::Scalar(0xff, 0x99, 0), 2);
- }
-}
-
-// Parse standalone tag descriptions
-std::map<int, StandaloneTagDescription> TagDetector::parseStandaloneTags (
- XmlRpc::XmlRpcValue& standalone_tags)
-{
- // Create map that will be filled by the function and returned in the end
- std::map<int, StandaloneTagDescription> descriptions;
- // Ensure the type is correct
- ROS_ASSERT(standalone_tags.getType() == XmlRpc::XmlRpcValue::TypeArray);
- // Loop through all tag descriptions
- for (int32_t i = 0; i < standalone_tags.size(); i++)
- {
-
- // i-th tag description
- XmlRpc::XmlRpcValue& tag_description = standalone_tags[i];
-
- // Assert the tag description is a struct
- ROS_ASSERT(tag_description.getType() ==
- XmlRpc::XmlRpcValue::TypeStruct);
- // Assert type of field "id" is an int
- ROS_ASSERT(tag_description["id"].getType() ==
- XmlRpc::XmlRpcValue::TypeInt);
- // Assert type of field "size" is a double
- ROS_ASSERT(tag_description["size"].getType() ==
- XmlRpc::XmlRpcValue::TypeDouble);
-
- int id = (int)tag_description["id"]; // tag id
- // Tag size (square, side length in meters)
- double size = (double)tag_description["size"];
-
- // Custom frame name, if such a field exists for this tag
- std::string frame_name;
- if(tag_description.hasMember("name"))
- {
- // Assert type of field "name" is a string
- ROS_ASSERT(tag_description["name"].getType() ==
- XmlRpc::XmlRpcValue::TypeString);
- frame_name = (std::string)tag_description["name"];
- }
- else
- {
- std::stringstream frame_name_stream;
- frame_name_stream << "tag_" << id;
- frame_name = frame_name_stream.str();
- }
-
- StandaloneTagDescription description(id, size, frame_name);
- ROS_INFO_STREAM("Loaded tag config: " << id << ", size: " <<
- size << ", frame_name: " << frame_name.c_str());
- // Add this tag's description to map of descriptions
- descriptions.insert(std::make_pair(id, description));
- }
-
- return descriptions;
-}
-
-// parse tag bundle descriptions
-std::vector<TagBundleDescription> TagDetector::parseTagBundles (
- XmlRpc::XmlRpcValue& tag_bundles)
-{
- std::vector<TagBundleDescription> descriptions;
- ROS_ASSERT(tag_bundles.getType() == XmlRpc::XmlRpcValue::TypeArray);
-
-  // Loop through all tag bundle descriptions
-  for (int32_t i=0; i<tag_bundles.size(); i++)
-  {
-    ROS_ASSERT(tag_bundles[i].getType() == XmlRpc::XmlRpcValue::TypeStruct);
-    // i-th tag bundle description
-    XmlRpc::XmlRpcValue& bundle_description = tag_bundles[i];
-
-    std::string bundleName;
-    if (bundle_description.hasMember("name"))
-    {
-      ROS_ASSERT(bundle_description["name"].getType() ==
-                 XmlRpc::XmlRpcValue::TypeString);
-      bundleName = (std::string)bundle_description["name"];
-    }
-    else
-    {
-      std::stringstream bundle_name_stream;
-      bundle_name_stream << "bundle_" << i;
-      bundleName = bundle_name_stream.str();
-    }
-    TagBundleDescription bundle_i(bundleName);
-    ROS_INFO("Loading tag bundle '%s'", bundle_i.name().c_str());
-
-    // Loop through each member tag of the bundle
-    ROS_ASSERT(bundle_description["layout"].getType() ==
-               XmlRpc::XmlRpcValue::TypeArray);
-    XmlRpc::XmlRpcValue& member_tags = bundle_description["layout"];
-    for (int32_t j=0; j<member_tags.size(); j++)
-    {
-      ROS_ASSERT(member_tags[j].getType() == XmlRpc::XmlRpcValue::TypeStruct);
-      XmlRpc::XmlRpcValue& tag = member_tags[j];
-
-      ROS_ASSERT(tag["id"].getType() == XmlRpc::XmlRpcValue::TypeInt);
-      int id = tag["id"];
-
-      // Tag size (square, side length in meters)
-      double size = xmlRpcGetDouble(tag, "size");
-
-      // Make sure that if this tag was specified also as standalone,
-      // then the sizes match
-      StandaloneTagDescription* standaloneDescription;
-      if (findStandaloneTagDescription(id, standaloneDescription, false))
-      {
-        ROS_ASSERT(size == standaloneDescription->size());
- }
-
- // Get this tag's pose with respect to the bundle origin
- double x = xmlRpcGetDoubleWithDefault(tag, "x", 0.);
- double y = xmlRpcGetDoubleWithDefault(tag, "y", 0.);
- double z = xmlRpcGetDoubleWithDefault(tag, "z", 0.);
- double qw = xmlRpcGetDoubleWithDefault(tag, "qw", 1.);
- double qx = xmlRpcGetDoubleWithDefault(tag, "qx", 0.);
- double qy = xmlRpcGetDoubleWithDefault(tag, "qy", 0.);
- double qz = xmlRpcGetDoubleWithDefault(tag, "qz", 0.);
- Eigen::Quaterniond q_tag(qw, qx, qy, qz);
- q_tag.normalize();
- Eigen::Matrix3d R_oi = q_tag.toRotationMatrix();
-
- // Build the rigid transform from tag_j to the bundle origin
- cv::Matx44d T_mj(R_oi(0,0), R_oi(0,1), R_oi(0,2), x,
- R_oi(1,0), R_oi(1,1), R_oi(1,2), y,
- R_oi(2,0), R_oi(2,1), R_oi(2,2), z,
- 0, 0, 0, 1);
-
- // Register the tag member
- bundle_i.addMemberTag(id, size, T_mj);
- ROS_INFO_STREAM(" " << j << ") id: " << id << ", size: " << size << ", "
- << "p = [" << x << "," << y << "," << z << "], "
- << "q = [" << qw << "," << qx << "," << qy << ","
- << qz << "]");
- }
- descriptions.push_back(bundle_i);
- }
- return descriptions;
-}
-
-double TagDetector::xmlRpcGetDouble (XmlRpc::XmlRpcValue& xmlValue,
- std::string field) const
-{
- ROS_ASSERT((xmlValue[field].getType() == XmlRpc::XmlRpcValue::TypeDouble) ||
- (xmlValue[field].getType() == XmlRpc::XmlRpcValue::TypeInt));
- if (xmlValue[field].getType() == XmlRpc::XmlRpcValue::TypeInt)
- {
- int tmp = xmlValue[field];
- return (double)tmp;
- }
- else
- {
- return xmlValue[field];
- }
-}
-
-double TagDetector::xmlRpcGetDoubleWithDefault (XmlRpc::XmlRpcValue& xmlValue,
- std::string field,
- double defaultValue) const
-{
- if (xmlValue.hasMember(field))
- {
- ROS_ASSERT((xmlValue[field].getType() == XmlRpc::XmlRpcValue::TypeDouble) ||
- (xmlValue[field].getType() == XmlRpc::XmlRpcValue::TypeInt));
- if (xmlValue[field].getType() == XmlRpc::XmlRpcValue::TypeInt)
- {
- int tmp = xmlValue[field];
- return (double)tmp;
- }
- else
- {
- return xmlValue[field];
- }
- }
- else
- {
- return defaultValue;
- }
-}
-
-bool TagDetector::findStandaloneTagDescription (
- int id, StandaloneTagDescription*& descriptionContainer, bool printWarning)
-{
- std::map<int, StandaloneTagDescription>::iterator description_itr =
- standalone_tag_descriptions_.find(id);
- if (description_itr == standalone_tag_descriptions_.end())
- {
- if (printWarning)
- {
- ROS_WARN_THROTTLE(10.0, "Requested description of standalone tag ID [%d],"
- " but no description was found...",id);
- }
- return false;
- }
- descriptionContainer = &(description_itr->second);
- return true;
-}
-
-} // namespace apriltag_ros
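The pose recovery in `getRelativeTransform()` is plain PnP. A sketch of the same computation through OpenCV's Python bindings, with a hypothetical tag half-size and made-up pixel coordinates (the four corners go counterclockwise from the bottom-left, matching `addObjectPoints`):

```python
import cv2
import numpy as np

s = 0.05                                          # half the tag side [m], placeholder
object_points = np.array([[-s, -s, 0], [s, -s, 0],
                          [s,  s, 0], [-s,  s, 0]], dtype=np.float64)
image_points = np.array([[310., 260.], [330., 261.],
                         [331., 240.], [309., 239.]])  # made-up pixels
fx = fy = 600.0; cx, cy = 320.0, 240.0
camera_matrix = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1.]])

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, None)
R_mat, _ = cv2.Rodrigues(rvec)                    # 3x3 rotation
T = np.eye(4)                                     # homogeneous camera-from-tag pose
T[:3, :3], T[:3, 3] = R_mat, tvec.ravel()
```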
diff --git a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/src/continuous_detector.cpp b/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/src/continuous_detector.cpp
deleted file mode 100644
index 9b9a590..0000000
--- a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/src/continuous_detector.cpp
+++ /dev/null
@@ -1,106 +0,0 @@
-/**
- * Copyright (c) 2017, California Institute of Technology.
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *
- * 1. Redistributions of source code must retain the above copyright notice,
- * this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright notice,
- * this list of conditions and the following disclaimer in the documentation
- * and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
- * POSSIBILITY OF SUCH DAMAGE.
- *
- * The views and conclusions contained in the software and documentation are
- * those of the authors and should not be interpreted as representing official
- * policies, either expressed or implied, of the California Institute of
- * Technology.
- */
-
-#include "apriltag_ros/continuous_detector.h"
-
-#include <pluginlib/class_list_macros.h>
-
-PLUGINLIB_EXPORT_CLASS(apriltag_ros::ContinuousDetector, nodelet::Nodelet);
-
-namespace apriltag_ros
-{
-void ContinuousDetector::onInit ()
-{
- ros::NodeHandle& nh = getNodeHandle();
- ros::NodeHandle& pnh = getPrivateNodeHandle();
-
- tag_detector_ = std::shared_ptr<TagDetector>(new TagDetector(pnh));
- draw_tag_detections_image_ = getAprilTagOption<bool>(pnh,
- "publish_tag_detections_image", false);
- it_ = std::shared_ptr<image_transport::ImageTransport>(
- new image_transport::ImageTransport(nh));
-
- std::string transport_hint;
- pnh.param<std::string>("transport_hint", transport_hint, "raw");
-
- camera_image_subscriber_ =
- it_->subscribeCamera("image_rect", 1,
- &ContinuousDetector::imageCallback, this,
- image_transport::TransportHints(transport_hint));
- tag_detections_publisher_ =
- nh.advertise<AprilTagDetectionArray>("tag_detections", 1);
- if (draw_tag_detections_image_)
- {
- tag_detections_image_publisher_ = it_->advertise("tag_detections_image", 1);
- }
-}
-
-void ContinuousDetector::imageCallback (
- const sensor_msgs::ImageConstPtr& image_rect,
- const sensor_msgs::CameraInfoConstPtr& camera_info)
-{
- // Lazy updates:
- // When there are no subscribers _and_ when tf is not published,
- // skip detection.
- if (tag_detections_publisher_.getNumSubscribers() == 0 &&
- tag_detections_image_publisher_.getNumSubscribers() == 0 &&
- !tag_detector_->get_publish_tf())
- {
- // ROS_INFO_STREAM("No subscribers and no tf publishing, skip processing.");
- return;
- }
-
- // Convert ROS's sensor_msgs::Image to cv_bridge::CvImagePtr in order to run
- // AprilTag 2 on the image
- try
- {
- cv_image_ = cv_bridge::toCvCopy(image_rect, image_rect->encoding);
- }
- catch (cv_bridge::Exception& e)
- {
- ROS_ERROR("cv_bridge exception: %s", e.what());
- return;
- }
-
- // Publish detected tags in the image by AprilTag 2
- tag_detections_publisher_.publish(
- tag_detector_->detectTags(cv_image_,camera_info));
-
- // Publish the camera image overlaid by outlines of the detected tags and
- // their payload values
- if (draw_tag_detections_image_)
- {
- tag_detector_->drawDetections(cv_image_);
- tag_detections_image_publisher_.publish(cv_image_->toImageMsg());
- }
-}
-
-} // namespace apriltag_ros
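The callback above is the whole pipeline: a synchronized image and camera info in, a cv_bridge conversion, detection, publication. A rospy sketch of the same wiring, under stated assumptions: `detect_tags` is a hypothetical stand-in for `TagDetector::detectTags`, and `TimeSynchronizer` approximates what `subscribeCamera` does in the C++ node.

```python
import rospy
import message_filters
from cv_bridge import CvBridge, CvBridgeError
from sensor_msgs.msg import Image, CameraInfo

bridge = CvBridge()

def callback(image_msg, info_msg):
    try:
        # Convert the ROS image into an OpenCV array, keeping its encoding
        cv_image = bridge.imgmsg_to_cv2(image_msg, image_msg.encoding)
    except CvBridgeError as e:
        rospy.logerr("cv_bridge exception: %s", e)
        return
    # detections = detect_tags(cv_image, info_msg)   # hypothetical detector call
    # detections_pub.publish(detections)

rospy.init_node("tag_detector_sketch")
image_sub = message_filters.Subscriber("image_rect", Image)
info_sub = message_filters.Subscriber("camera_info", CameraInfo)
message_filters.TimeSynchronizer([image_sub, info_sub], 1).registerCallback(callback)
rospy.spin()
```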
diff --git a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/src/single_image_detector.cpp b/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/src/single_image_detector.cpp
deleted file mode 100644
index 6fd49f3..0000000
--- a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/src/single_image_detector.cpp
+++ /dev/null
@@ -1,96 +0,0 @@
-/**
- * Copyright (c) 2017, California Institute of Technology.
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *
- * 1. Redistributions of source code must retain the above copyright notice,
- * this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright notice,
- * this list of conditions and the following disclaimer in the documentation
- * and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
- * POSSIBILITY OF SUCH DAMAGE.
- *
- * The views and conclusions contained in the software and documentation are
- * those of the authors and should not be interpreted as representing official
- * policies, either expressed or implied, of the California Institute of
- * Technology.
- */
-
-#include "apriltag_ros/single_image_detector.h"
-
-#include <opencv2/highgui/highgui.hpp>
-#include <opencv2/imgcodecs.hpp>
-
-namespace apriltag_ros
-{
-
-SingleImageDetector::SingleImageDetector (ros::NodeHandle& nh,
- ros::NodeHandle& pnh) :
- tag_detector_(pnh)
-{
- // Advertise the single image analysis service
- single_image_analysis_service_ =
- nh.advertiseService("single_image_tag_detection",
- &SingleImageDetector::analyzeImage, this);
- tag_detections_publisher_ =
- nh.advertise<AprilTagDetectionArray>("tag_detections", 1);
- ROS_INFO_STREAM("Ready to do tag detection on single images");
-}
-
-bool SingleImageDetector::analyzeImage(
- apriltag_ros::AnalyzeSingleImage::Request& request,
- apriltag_ros::AnalyzeSingleImage::Response& response)
-{
-
- ROS_INFO("[ Summoned to analyze image ]");
- ROS_INFO("Image load path: %s",
- request.full_path_where_to_get_image.c_str());
- ROS_INFO("Image save path: %s",
- request.full_path_where_to_save_image.c_str());
-
- // Read the image
- cv::Mat image = cv::imread(request.full_path_where_to_get_image,
- cv::IMREAD_COLOR);
- if (image.data == NULL)
- {
- // Cannot read image
- ROS_ERROR_STREAM("Could not read image " <<
- request.full_path_where_to_get_image.c_str());
- return false;
- }
-
- // Detect tags in the image
- cv_bridge::CvImagePtr loaded_image(new cv_bridge::CvImage(std_msgs::Header(),
- "bgr8", image));
- loaded_image->header.frame_id = "camera";
- response.tag_detections =
- tag_detector_.detectTags(loaded_image,sensor_msgs::CameraInfoConstPtr(
- new sensor_msgs::CameraInfo(request.camera_info)));
-
- // Publish detected tags (AprilTagDetectionArray, basically an array of
- // geometry_msgs/PoseWithCovarianceStamped)
- tag_detections_publisher_.publish(response.tag_detections);
-
- // Save tag detections image
- tag_detector_.drawDetections(loaded_image);
- cv::imwrite(request.full_path_where_to_save_image, loaded_image->image);
-
- ROS_INFO("Done!\n");
-
- return true;
-}
-
-} // namespace apriltag_ros
diff --git a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/srv/AnalyzeSingleImage.srv b/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/srv/AnalyzeSingleImage.srv
deleted file mode 100644
index afea43c..0000000
--- a/UR_arm_camera_calibration-main/apriltag_ros/apriltag_ros/srv/AnalyzeSingleImage.srv
+++ /dev/null
@@ -1,14 +0,0 @@
-# Service which takes in:
-#
-# full_path_where_to_get_image : full path to the image to analyze (e.g. a .jpg)
-# full_path_where_to_save_image : full path where the image overlaid with the
-#                                 detected tags' borders and payload values is saved
-# camera_info : intrinsics of the camera which took the image
-#
-# and returns:
-#
-# tag_detections : the pose of each detected tag in the camera frame
-
-string full_path_where_to_get_image
-string full_path_where_to_save_image
-sensor_msgs/CameraInfo camera_info
----
-apriltag_ros/AprilTagDetectionArray tag_detections
\ No newline at end of file
diff --git a/UR_arm_camera_calibration-main/handeye.ipynb b/UR_arm_camera_calibration-main/handeye.ipynb
deleted file mode 100644
index 71350aa..0000000
--- a/UR_arm_camera_calibration-main/handeye.ipynb
+++ /dev/null
@@ -1,297 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "code",
- "execution_count": 1,
- "id": "cc0d0983",
- "metadata": {},
- "outputs": [],
- "source": [
- "%matplotlib notebook\n",
- "import numpy as np\n",
- "from scipy.spatial.transform import Rotation as R\n",
- "import matplotlib.pyplot as plt\n",
- "from mpl_toolkits.mplot3d import axes3d \n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 2,
- "id": "ec9771e6",
- "metadata": {},
- "outputs": [],
- "source": [
- "# if have reference point\n",
- "Reference_position = np.array([0.04147865063328958, -0.0009747523594091701, 0.0788922767828434,1])\n",
- "Reference_rotation = np.array([-0.13826266849719096, -0.1382012230239521, 0.6935305754449016, 0.6933968541735557])"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 3,
- "id": "4d0daa6c",
- "metadata": {},
- "outputs": [],
- "source": [
- "# if any old reference point\n",
- "old_1_position = np.array([-0.0105743, 0.00220382, -0.06984422])\n",
- "old_1_rotation = np.array([-0.69427804, 0.70323633, 0.11240994, -0.10392633])\n",
- "old_2_position = np.array([-0.00315727, 0.00339703, -0.03854263])\n",
- "old_2_rotation = np.array([-0.69215282, 0.69842839, 0.13571394, -0.12126001])\n",
- "old_3_position = np.array([-0.04407151, 0.0005267, 0.04110016,1])\n",
- "old_3_rotation = np.array([-0.13571394, 0.12126001, -0.69215282, 0.69842839])\n",
- "old_4_position = np.array([0.003423965279458376,0.012047656580157355,0.09273353744425332])\n",
- "old_4_rotation = np.array([-0.6912630690986508,0.6996620518515161,0.13597536997373824,0.11890786878817365])\n",
- "old_5_position = np.array([0.002147196318639265,0.010338427618018672,0.09173810544745292])\n",
- "old_5_rotation = np.array([-0.6899769594600504,0.698538859336977,0.1419315772005129,0.12582004929196722])\n",
- "\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 4,
- "id": "9fe9eea4",
- "metadata": {},
- "outputs": [],
- "source": [
- "# Matrix 1 (From camera to Tag or Arm_second_base to set position) calculation\n",
- "# M_1 = np.array([])\n",
- "M1_x = 0.0006023064322632371\n",
- "M1_y = 0.010909109872314858\n",
- "M1_z = 0.08713564910079528\n",
- "M1_R_x = 0.6961242419610357\n",
- "M1_R_y = -0.695215316078465\n",
- "M1_R_z = -0.12953962064348395\n",
- "M1_R_w = -0.1237181907709441\n",
- "\n",
- "M_1_position = np.array([M1_x,M1_y,M1_z,1])\n",
- "M_1_rotation = np.array([M1_R_x,M1_R_y,M1_R_z,M1_R_w])\n",
- "R_M1 = R.from_quat(M_1_rotation) \n",
- "M_1 = np.zeros([4,4])\n",
- "M_1[0:3,0:3] = R_M1.as_matrix()\n",
- "M_1[:,3] = M_1_position.T\n",
- "M_1_inv = np.linalg.inv(M_1)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 5,
- "id": "5ad9b971",
- "metadata": {},
- "outputs": [],
- "source": [
- "# Matrix 2(From EE to Tag or Arm_refer_base to set position) calculation\n",
- "M_2 = np.array([[-1.,0.,0.,0.],\n",
- " [0., 1. , 0. , 0.],\n",
- " [0.,0.,-1.,0.126],\n",
- " [0.,0.,0.,1.]])\n",
- "# M2_x = \n",
- "# M2_y = \n",
- "# M2_z =\n",
- "# M2_R_x = \n",
- "# M2_R_y = \n",
- "# M2_R_z =\n",
- "# M2_R_w =\n",
- "# M_2_position = np.array([M2_x,M2_y,M2_z,1])\n",
- "# M_2_rotation = np.array([M2_R_x,M2_R_y,M2_R_z,M2_R_w])\n",
- "# R_M2 = R.from_quat(M_2_rotation) \n",
- "# M_2 = np.zeros([4,4])\n",
- "# M_2[0:3,0:3] = R_M2.as_matrix()\n",
- "# M_2[:,3] = M_2_position.T"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 6,
- "id": "19b107d7",
- "metadata": {},
- "outputs": [],
- "source": [
- "#Result calculate\n",
- "Result = M_2 @ M_1_inv\n",
- "Result_position = np.array([Result[0,3],Result[1,3],Result[2,3]])\n",
- "Result_rotation = R.from_matrix(Result [0:3,0:3])\n",
- "Result_rotation = Result_rotation.as_quat()"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 7,
- "id": "39f7833f",
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "\n",
- "input Matrix 1(From camera to Tag or Arm_second(right)_base to set position) :\n",
- "\n",
- "[[-0.00289804 -0.9996473 -0.0263985 0.00308224]\n",
- " [-0.92945203 -0.00704666 0.36887569 0.01443166]\n",
- " [-0.3689316 0.02560516 -0.92910379 0.07736921]\n",
- " [ 0. 0. 0. 1. ]]\n",
- "\n",
- "input Matrix 2(From EE to Tag or Arm_refer_base to set position) :\n",
- "\n",
- "[[-1. 0. 0. 0. ]\n",
- " [ 0. 1. 0. 0. ]\n",
- " [ 0. 0. -1. 0.126]\n",
- " [ 0. 0. 0. 1. ]]\n",
- "\n",
- " Output Matrix(From EE to Camera or Arm_refer_base to Arm_second(right)_base) :\n",
- "\n",
- "[[ 0.00289804 0.92945203 0.3689316 -0.04196641]\n",
- " [-0.9996473 -0.00704666 0.02560516 0.0012018 ]\n",
- " [ 0.0263985 -0.36887569 0.92910379 0.05935809]\n",
- " [ 0. 0. 0. 1. ]]\n",
- "\n",
- " Position Matrix(X,Y,Z) :\n",
- "\n",
- "[-0.04196641 0.0012018 0.05935809]\n",
- "\n",
- " Rotation Matrix(X,Y,Z,W) :\n",
- "\n",
- "[ 0.14216268 -0.1234418 0.69520724 -0.69371377]\n"
- ]
- }
- ],
- "source": [
- "# Output\n",
- "\n",
- "print(\"\\ninput Matrix 1(From camera to Tag or Arm_second(right)_base to set position) :\\n\" )\n",
- "print(M_1)\n",
- "print(\"\\ninput Matrix 2(From EE to Tag or Arm_refer_base to set position) :\\n\" )\n",
- "print(M_2)\n",
- "\n",
- "print(\"\\n Output Matrix(From EE to Camera or Arm_refer_base to Arm_second(right)_base) :\\n\" )\n",
- "print(Result)\n",
- "\n",
- "print(\"\\n Position Matrix(X,Y,Z) :\\n\" )\n",
- "print(Result_position)\n",
- "print(\"\\n Rotation Matrix(X,Y,Z,W) :\\n\" )\n",
- "print(-Result_rotation)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 8,
- "id": "3844d951",
- "metadata": {},
- "outputs": [],
- "source": [
- "def set_axes_equal(ax):\n",
- " '''Make axes of 3D plot have equal scale so that spheres appear as spheres,\n",
- " cubes as cubes, etc.. This is one possible solution to Matplotlib's\n",
- " ax.set_aspect('equal') and ax.axis('equal') not working for 3D.\n",
- "\n",
- " Input\n",
- " ax: a matplotlib axis, e.g., as output from plt.gca().\n",
- " '''\n",
- "\n",
- " x_limits = ax.get_xlim3d()\n",
- " y_limits = ax.get_ylim3d()\n",
- " z_limits = ax.get_zlim3d()\n",
- "\n",
- " x_range = abs(x_limits[1] - x_limits[0])\n",
- " x_middle = np.mean(x_limits)\n",
- " y_range = abs(y_limits[1] - y_limits[0])\n",
- " y_middle = np.mean(y_limits)\n",
- " z_range = abs(z_limits[1] - z_limits[0])\n",
- " z_middle = np.mean(z_limits)\n",
- "\n",
- " # The plot bounding box is a sphere in the sense of the infinity\n",
- " # norm, hence I call half the max range the plot radius.\n",
- " plot_radius = 0.5*max([x_range, y_range, z_range])\n",
- "\n",
- " ax.set_xlim3d([x_middle - plot_radius, x_middle + plot_radius])\n",
- " ax.set_ylim3d([y_middle - plot_radius, y_middle + plot_radius])\n",
- " ax.set_zlim3d([z_middle - plot_radius, z_middle + plot_radius])"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 9,
- "id": "e1777430",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "application/javascript": "/* Put everything inside the global mpl namespace */\n/* global mpl */\nwindow.mpl = {};\n\nmpl.get_websocket_type = function () {\n if (typeof WebSocket !== 'undefined') {\n return WebSocket;\n } else if (typeof MozWebSocket !== 'undefined') {\n return MozWebSocket;\n } else {\n alert(\n 'Your browser does not have WebSocket support. ' +\n 'Please try Chrome, Safari or Firefox ≥ 6. ' +\n 'Firefox 4 and 5 are also supported but you ' +\n 'have to enable WebSockets in about:config.'\n );\n }\n};\n\nmpl.figure = function (figure_id, websocket, ondownload, parent_element) {\n this.id = figure_id;\n\n this.ws = websocket;\n\n this.supports_binary = this.ws.binaryType !== undefined;\n\n if (!this.supports_binary) {\n var warnings = document.getElementById('mpl-warnings');\n if (warnings) {\n warnings.style.display = 'block';\n warnings.textContent =\n 'This browser does not support binary websocket messages. ' +\n 'Performance may be slow.';\n }\n }\n\n this.imageObj = new Image();\n\n this.context = undefined;\n this.message = undefined;\n this.canvas = undefined;\n this.rubberband_canvas = undefined;\n this.rubberband_context = undefined;\n this.format_dropdown = undefined;\n\n this.image_mode = 'full';\n\n this.root = document.createElement('div');\n this.root.setAttribute('style', 'display: inline-block');\n this._root_extra_style(this.root);\n\n parent_element.appendChild(this.root);\n\n this._init_header(this);\n this._init_canvas(this);\n this._init_toolbar(this);\n\n var fig = this;\n\n this.waiting = false;\n\n this.ws.onopen = function () {\n fig.send_message('supports_binary', { value: fig.supports_binary });\n fig.send_message('send_image_mode', {});\n if (fig.ratio !== 1) {\n fig.send_message('set_device_pixel_ratio', {\n device_pixel_ratio: fig.ratio,\n });\n }\n fig.send_message('refresh', {});\n };\n\n this.imageObj.onload = function () {\n if (fig.image_mode === 'full') {\n // Full images could contain transparency (where diff images\n // almost always do), so we need to clear the canvas so that\n // there is no ghosting.\n fig.context.clearRect(0, 0, fig.canvas.width, fig.canvas.height);\n }\n fig.context.drawImage(fig.imageObj, 0, 0);\n };\n\n this.imageObj.onunload = function () {\n fig.ws.close();\n };\n\n this.ws.onmessage = this._make_on_message_function(this);\n\n this.ondownload = ondownload;\n};\n\nmpl.figure.prototype._init_header = function () {\n var titlebar = document.createElement('div');\n titlebar.classList =\n 'ui-dialog-titlebar ui-widget-header ui-corner-all ui-helper-clearfix';\n var titletext = document.createElement('div');\n titletext.classList = 'ui-dialog-title';\n titletext.setAttribute(\n 'style',\n 'width: 100%; text-align: center; padding: 3px;'\n );\n titlebar.appendChild(titletext);\n this.root.appendChild(titlebar);\n this.header = titletext;\n};\n\nmpl.figure.prototype._canvas_extra_style = function (_canvas_div) {};\n\nmpl.figure.prototype._root_extra_style = function (_canvas_div) {};\n\nmpl.figure.prototype._init_canvas = function () {\n var fig = this;\n\n var canvas_div = (this.canvas_div = document.createElement('div'));\n canvas_div.setAttribute(\n 'style',\n 'border: 1px solid #ddd;' +\n 'box-sizing: content-box;' +\n 'clear: both;' +\n 'min-height: 1px;' +\n 'min-width: 1px;' +\n 'outline: 0;' +\n 'overflow: hidden;' +\n 'position: relative;' +\n 'resize: both;'\n );\n\n function on_keyboard_event_closure(name) {\n return function (event) {\n return fig.key_event(event, name);\n };\n }\n\n 
canvas_div.addEventListener(\n 'keydown',\n on_keyboard_event_closure('key_press')\n );\n canvas_div.addEventListener(\n 'keyup',\n on_keyboard_event_closure('key_release')\n );\n\n this._canvas_extra_style(canvas_div);\n this.root.appendChild(canvas_div);\n\n var canvas = (this.canvas = document.createElement('canvas'));\n canvas.classList.add('mpl-canvas');\n canvas.setAttribute('style', 'box-sizing: content-box;');\n\n this.context = canvas.getContext('2d');\n\n var backingStore =\n this.context.backingStorePixelRatio ||\n this.context.webkitBackingStorePixelRatio ||\n this.context.mozBackingStorePixelRatio ||\n this.context.msBackingStorePixelRatio ||\n this.context.oBackingStorePixelRatio ||\n this.context.backingStorePixelRatio ||\n 1;\n\n this.ratio = (window.devicePixelRatio || 1) / backingStore;\n\n var rubberband_canvas = (this.rubberband_canvas = document.createElement(\n 'canvas'\n ));\n rubberband_canvas.setAttribute(\n 'style',\n 'box-sizing: content-box; position: absolute; left: 0; top: 0; z-index: 1;'\n );\n\n // Apply a ponyfill if ResizeObserver is not implemented by browser.\n if (this.ResizeObserver === undefined) {\n if (window.ResizeObserver !== undefined) {\n this.ResizeObserver = window.ResizeObserver;\n } else {\n var obs = _JSXTOOLS_RESIZE_OBSERVER({});\n this.ResizeObserver = obs.ResizeObserver;\n }\n }\n\n this.resizeObserverInstance = new this.ResizeObserver(function (entries) {\n var nentries = entries.length;\n for (var i = 0; i < nentries; i++) {\n var entry = entries[i];\n var width, height;\n if (entry.contentBoxSize) {\n if (entry.contentBoxSize instanceof Array) {\n // Chrome 84 implements new version of spec.\n width = entry.contentBoxSize[0].inlineSize;\n height = entry.contentBoxSize[0].blockSize;\n } else {\n // Firefox implements old version of spec.\n width = entry.contentBoxSize.inlineSize;\n height = entry.contentBoxSize.blockSize;\n }\n } else {\n // Chrome <84 implements even older version of spec.\n width = entry.contentRect.width;\n height = entry.contentRect.height;\n }\n\n // Keep the size of the canvas and rubber band canvas in sync with\n // the canvas container.\n if (entry.devicePixelContentBoxSize) {\n // Chrome 84 implements new version of spec.\n canvas.setAttribute(\n 'width',\n entry.devicePixelContentBoxSize[0].inlineSize\n );\n canvas.setAttribute(\n 'height',\n entry.devicePixelContentBoxSize[0].blockSize\n );\n } else {\n canvas.setAttribute('width', width * fig.ratio);\n canvas.setAttribute('height', height * fig.ratio);\n }\n canvas.setAttribute(\n 'style',\n 'width: ' + width + 'px; height: ' + height + 'px;'\n );\n\n rubberband_canvas.setAttribute('width', width);\n rubberband_canvas.setAttribute('height', height);\n\n // And update the size in Python. 
We ignore the initial 0/0 size\n // that occurs as the element is placed into the DOM, which should\n // otherwise not happen due to the minimum size styling.\n if (fig.ws.readyState == 1 && width != 0 && height != 0) {\n fig.request_resize(width, height);\n }\n }\n });\n this.resizeObserverInstance.observe(canvas_div);\n\n function on_mouse_event_closure(name) {\n return function (event) {\n return fig.mouse_event(event, name);\n };\n }\n\n rubberband_canvas.addEventListener(\n 'mousedown',\n on_mouse_event_closure('button_press')\n );\n rubberband_canvas.addEventListener(\n 'mouseup',\n on_mouse_event_closure('button_release')\n );\n rubberband_canvas.addEventListener(\n 'dblclick',\n on_mouse_event_closure('dblclick')\n );\n // Throttle sequential mouse events to 1 every 20ms.\n rubberband_canvas.addEventListener(\n 'mousemove',\n on_mouse_event_closure('motion_notify')\n );\n\n rubberband_canvas.addEventListener(\n 'mouseenter',\n on_mouse_event_closure('figure_enter')\n );\n rubberband_canvas.addEventListener(\n 'mouseleave',\n on_mouse_event_closure('figure_leave')\n );\n\n canvas_div.addEventListener('wheel', function (event) {\n if (event.deltaY < 0) {\n event.step = 1;\n } else {\n event.step = -1;\n }\n on_mouse_event_closure('scroll')(event);\n });\n\n canvas_div.appendChild(canvas);\n canvas_div.appendChild(rubberband_canvas);\n\n this.rubberband_context = rubberband_canvas.getContext('2d');\n this.rubberband_context.strokeStyle = '#000000';\n\n this._resize_canvas = function (width, height, forward) {\n if (forward) {\n canvas_div.style.width = width + 'px';\n canvas_div.style.height = height + 'px';\n }\n };\n\n // Disable right mouse context menu.\n this.rubberband_canvas.addEventListener('contextmenu', function (_e) {\n event.preventDefault();\n return false;\n });\n\n function set_focus() {\n canvas.focus();\n canvas_div.focus();\n }\n\n window.setTimeout(set_focus, 100);\n};\n\nmpl.figure.prototype._init_toolbar = function () {\n var fig = this;\n\n var toolbar = document.createElement('div');\n toolbar.classList = 'mpl-toolbar';\n this.root.appendChild(toolbar);\n\n function on_click_closure(name) {\n return function (_event) {\n return fig.toolbar_button_onclick(name);\n };\n }\n\n function on_mouseover_closure(tooltip) {\n return function (event) {\n if (!event.currentTarget.disabled) {\n return fig.toolbar_button_onmouseover(tooltip);\n }\n };\n }\n\n fig.buttons = {};\n var buttonGroup = document.createElement('div');\n buttonGroup.classList = 'mpl-button-group';\n for (var toolbar_ind in mpl.toolbar_items) {\n var name = mpl.toolbar_items[toolbar_ind][0];\n var tooltip = mpl.toolbar_items[toolbar_ind][1];\n var image = mpl.toolbar_items[toolbar_ind][2];\n var method_name = mpl.toolbar_items[toolbar_ind][3];\n\n if (!name) {\n /* Instead of a spacer, we start a new button group. 
*/\n if (buttonGroup.hasChildNodes()) {\n toolbar.appendChild(buttonGroup);\n }\n buttonGroup = document.createElement('div');\n buttonGroup.classList = 'mpl-button-group';\n continue;\n }\n\n var button = (fig.buttons[name] = document.createElement('button'));\n button.classList = 'mpl-widget';\n button.setAttribute('role', 'button');\n button.setAttribute('aria-disabled', 'false');\n button.addEventListener('click', on_click_closure(method_name));\n button.addEventListener('mouseover', on_mouseover_closure(tooltip));\n\n var icon_img = document.createElement('img');\n icon_img.src = '_images/' + image + '.png';\n icon_img.srcset = '_images/' + image + '_large.png 2x';\n icon_img.alt = tooltip;\n button.appendChild(icon_img);\n\n buttonGroup.appendChild(button);\n }\n\n if (buttonGroup.hasChildNodes()) {\n toolbar.appendChild(buttonGroup);\n }\n\n var fmt_picker = document.createElement('select');\n fmt_picker.classList = 'mpl-widget';\n toolbar.appendChild(fmt_picker);\n this.format_dropdown = fmt_picker;\n\n for (var ind in mpl.extensions) {\n var fmt = mpl.extensions[ind];\n var option = document.createElement('option');\n option.selected = fmt === mpl.default_extension;\n option.innerHTML = fmt;\n fmt_picker.appendChild(option);\n }\n\n var status_bar = document.createElement('span');\n status_bar.classList = 'mpl-message';\n toolbar.appendChild(status_bar);\n this.message = status_bar;\n};\n\nmpl.figure.prototype.request_resize = function (x_pixels, y_pixels) {\n // Request matplotlib to resize the figure. Matplotlib will then trigger a resize in the client,\n // which will in turn request a refresh of the image.\n this.send_message('resize', { width: x_pixels, height: y_pixels });\n};\n\nmpl.figure.prototype.send_message = function (type, properties) {\n properties['type'] = type;\n properties['figure_id'] = this.id;\n this.ws.send(JSON.stringify(properties));\n};\n\nmpl.figure.prototype.send_draw_message = function () {\n if (!this.waiting) {\n this.waiting = true;\n this.ws.send(JSON.stringify({ type: 'draw', figure_id: this.id }));\n }\n};\n\nmpl.figure.prototype.handle_save = function (fig, _msg) {\n var format_dropdown = fig.format_dropdown;\n var format = format_dropdown.options[format_dropdown.selectedIndex].value;\n fig.ondownload(fig, format);\n};\n\nmpl.figure.prototype.handle_resize = function (fig, msg) {\n var size = msg['size'];\n if (size[0] !== fig.canvas.width || size[1] !== fig.canvas.height) {\n fig._resize_canvas(size[0], size[1], msg['forward']);\n fig.send_message('refresh', {});\n }\n};\n\nmpl.figure.prototype.handle_rubberband = function (fig, msg) {\n var x0 = msg['x0'] / fig.ratio;\n var y0 = (fig.canvas.height - msg['y0']) / fig.ratio;\n var x1 = msg['x1'] / fig.ratio;\n var y1 = (fig.canvas.height - msg['y1']) / fig.ratio;\n x0 = Math.floor(x0) + 0.5;\n y0 = Math.floor(y0) + 0.5;\n x1 = Math.floor(x1) + 0.5;\n y1 = Math.floor(y1) + 0.5;\n var min_x = Math.min(x0, x1);\n var min_y = Math.min(y0, y1);\n var width = Math.abs(x1 - x0);\n var height = Math.abs(y1 - y0);\n\n fig.rubberband_context.clearRect(\n 0,\n 0,\n fig.canvas.width / fig.ratio,\n fig.canvas.height / fig.ratio\n );\n\n fig.rubberband_context.strokeRect(min_x, min_y, width, height);\n};\n\nmpl.figure.prototype.handle_figure_label = function (fig, msg) {\n // Updates the figure title.\n fig.header.textContent = msg['label'];\n};\n\nmpl.figure.prototype.handle_cursor = function (fig, msg) {\n fig.rubberband_canvas.style.cursor = msg['cursor'];\n};\n\nmpl.figure.prototype.handle_message = 
function (fig, msg) {\n fig.message.textContent = msg['message'];\n};\n\nmpl.figure.prototype.handle_draw = function (fig, _msg) {\n // Request the server to send over a new figure.\n fig.send_draw_message();\n};\n\nmpl.figure.prototype.handle_image_mode = function (fig, msg) {\n fig.image_mode = msg['mode'];\n};\n\nmpl.figure.prototype.handle_history_buttons = function (fig, msg) {\n for (var key in msg) {\n if (!(key in fig.buttons)) {\n continue;\n }\n fig.buttons[key].disabled = !msg[key];\n fig.buttons[key].setAttribute('aria-disabled', !msg[key]);\n }\n};\n\nmpl.figure.prototype.handle_navigate_mode = function (fig, msg) {\n if (msg['mode'] === 'PAN') {\n fig.buttons['Pan'].classList.add('active');\n fig.buttons['Zoom'].classList.remove('active');\n } else if (msg['mode'] === 'ZOOM') {\n fig.buttons['Pan'].classList.remove('active');\n fig.buttons['Zoom'].classList.add('active');\n } else {\n fig.buttons['Pan'].classList.remove('active');\n fig.buttons['Zoom'].classList.remove('active');\n }\n};\n\nmpl.figure.prototype.updated_canvas_event = function () {\n // Called whenever the canvas gets updated.\n this.send_message('ack', {});\n};\n\n// A function to construct a web socket function for onmessage handling.\n// Called in the figure constructor.\nmpl.figure.prototype._make_on_message_function = function (fig) {\n return function socket_on_message(evt) {\n if (evt.data instanceof Blob) {\n var img = evt.data;\n if (img.type !== 'image/png') {\n /* FIXME: We get \"Resource interpreted as Image but\n * transferred with MIME type text/plain:\" errors on\n * Chrome. But how to set the MIME type? It doesn't seem\n * to be part of the websocket stream */\n img.type = 'image/png';\n }\n\n /* Free the memory for the previous frames */\n if (fig.imageObj.src) {\n (window.URL || window.webkitURL).revokeObjectURL(\n fig.imageObj.src\n );\n }\n\n fig.imageObj.src = (window.URL || window.webkitURL).createObjectURL(\n img\n );\n fig.updated_canvas_event();\n fig.waiting = false;\n return;\n } else if (\n typeof evt.data === 'string' &&\n evt.data.slice(0, 21) === 'data:image/png;base64'\n ) {\n fig.imageObj.src = evt.data;\n fig.updated_canvas_event();\n fig.waiting = false;\n return;\n }\n\n var msg = JSON.parse(evt.data);\n var msg_type = msg['type'];\n\n // Call the \"handle_{type}\" callback, which takes\n // the figure and JSON message as its only arguments.\n try {\n var callback = fig['handle_' + msg_type];\n } catch (e) {\n console.log(\n \"No handler for the '\" + msg_type + \"' message type: \",\n msg\n );\n return;\n }\n\n if (callback) {\n try {\n // console.log(\"Handling '\" + msg_type + \"' message: \", msg);\n callback(fig, msg);\n } catch (e) {\n console.log(\n \"Exception inside the 'handler_\" + msg_type + \"' callback:\",\n e,\n e.stack,\n msg\n );\n }\n }\n };\n};\n\n// from https://stackoverflow.com/questions/1114465/getting-mouse-location-in-canvas\nmpl.findpos = function (e) {\n //this section is from http://www.quirksmode.org/js/events_properties.html\n var targ;\n if (!e) {\n e = window.event;\n }\n if (e.target) {\n targ = e.target;\n } else if (e.srcElement) {\n targ = e.srcElement;\n }\n if (targ.nodeType === 3) {\n // defeat Safari bug\n targ = targ.parentNode;\n }\n\n // pageX,Y are the mouse positions relative to the document\n var boundingRect = targ.getBoundingClientRect();\n var x = e.pageX - (boundingRect.left + document.body.scrollLeft);\n var y = e.pageY - (boundingRect.top + document.body.scrollTop);\n\n return { x: x, y: y };\n};\n\n/*\n * return a copy of 
an object with only non-object keys\n * we need this to avoid circular references\n * https://stackoverflow.com/a/24161582/3208463\n */\nfunction simpleKeys(original) {\n return Object.keys(original).reduce(function (obj, key) {\n if (typeof original[key] !== 'object') {\n obj[key] = original[key];\n }\n return obj;\n }, {});\n}\n\nmpl.figure.prototype.mouse_event = function (event, name) {\n var canvas_pos = mpl.findpos(event);\n\n if (name === 'button_press') {\n this.canvas.focus();\n this.canvas_div.focus();\n }\n\n var x = canvas_pos.x * this.ratio;\n var y = canvas_pos.y * this.ratio;\n\n this.send_message(name, {\n x: x,\n y: y,\n button: event.button,\n step: event.step,\n guiEvent: simpleKeys(event),\n });\n\n /* This prevents the web browser from automatically changing to\n * the text insertion cursor when the button is pressed. We want\n * to control all of the cursor setting manually through the\n * 'cursor' event from matplotlib */\n event.preventDefault();\n return false;\n};\n\nmpl.figure.prototype._key_event_extra = function (_event, _name) {\n // Handle any extra behaviour associated with a key event\n};\n\nmpl.figure.prototype.key_event = function (event, name) {\n // Prevent repeat events\n if (name === 'key_press') {\n if (event.key === this._key) {\n return;\n } else {\n this._key = event.key;\n }\n }\n if (name === 'key_release') {\n this._key = null;\n }\n\n var value = '';\n if (event.ctrlKey && event.key !== 'Control') {\n value += 'ctrl+';\n }\n else if (event.altKey && event.key !== 'Alt') {\n value += 'alt+';\n }\n else if (event.shiftKey && event.key !== 'Shift') {\n value += 'shift+';\n }\n\n value += 'k' + event.key;\n\n this._key_event_extra(event, name);\n\n this.send_message(name, { key: value, guiEvent: simpleKeys(event) });\n return false;\n};\n\nmpl.figure.prototype.toolbar_button_onclick = function (name) {\n if (name === 'download') {\n this.handle_save(this, null);\n } else {\n this.send_message('toolbar_button', { name: name });\n }\n};\n\nmpl.figure.prototype.toolbar_button_onmouseover = function (tooltip) {\n this.message.textContent = tooltip;\n};\n\n///////////////// REMAINING CONTENT GENERATED BY embed_js.py /////////////////\n// prettier-ignore\nvar _JSXTOOLS_RESIZE_OBSERVER=function(A){var t,i=new WeakMap,n=new WeakMap,a=new WeakMap,r=new WeakMap,o=new Set;function s(e){if(!(this instanceof s))throw new TypeError(\"Constructor requires 'new' operator\");i.set(this,e)}function h(){throw new TypeError(\"Function is not a constructor\")}function c(e,t,i,n){e=0 in arguments?Number(arguments[0]):0,t=1 in arguments?Number(arguments[1]):0,i=2 in arguments?Number(arguments[2]):0,n=3 in arguments?Number(arguments[3]):0,this.right=(this.x=this.left=e)+(this.width=i),this.bottom=(this.y=this.top=t)+(this.height=n),Object.freeze(this)}function d(){t=requestAnimationFrame(d);var s=new WeakMap,p=new Set;o.forEach((function(t){r.get(t).forEach((function(i){var r=t instanceof window.SVGElement,o=a.get(t),d=r?0:parseFloat(o.paddingTop),f=r?0:parseFloat(o.paddingRight),l=r?0:parseFloat(o.paddingBottom),u=r?0:parseFloat(o.paddingLeft),g=r?0:parseFloat(o.borderTopWidth),m=r?0:parseFloat(o.borderRightWidth),w=r?0:parseFloat(o.borderBottomWidth),b=u+f,F=d+l,v=(r?0:parseFloat(o.borderLeftWidth))+m,W=g+w,y=r?0:t.offsetHeight-W-t.clientHeight,E=r?0:t.offsetWidth-v-t.clientWidth,R=b+v,z=F+W,M=r?t.width:parseFloat(o.width)-R-E,O=r?t.height:parseFloat(o.height)-z-y;if(n.has(t)){var k=n.get(t);if(k[0]===M&&k[1]===O)return}n.set(t,[M,O]);var 
S=Object.create(h.prototype);S.target=t,S.contentRect=new c(u,d,M,O),s.has(i)||(s.set(i,[]),p.add(i)),s.get(i).push(S)}))})),p.forEach((function(e){i.get(e).call(e,s.get(e),e)}))}return s.prototype.observe=function(i){if(i instanceof window.Element){r.has(i)||(r.set(i,new Set),o.add(i),a.set(i,window.getComputedStyle(i)));var n=r.get(i);n.has(this)||n.add(this),cancelAnimationFrame(t),t=requestAnimationFrame(d)}},s.prototype.unobserve=function(i){if(i instanceof window.Element&&r.has(i)){var n=r.get(i);n.has(this)&&(n.delete(this),n.size||(r.delete(i),o.delete(i))),n.size||r.delete(i),o.size||cancelAnimationFrame(t)}},A.DOMRectReadOnly=c,A.ResizeObserver=s,A.ResizeObserverEntry=h,A}; // eslint-disable-line\nmpl.toolbar_items = [[\"Home\", \"Reset original view\", \"fa fa-home icon-home\", \"home\"], [\"Back\", \"Back to previous view\", \"fa fa-arrow-left icon-arrow-left\", \"back\"], [\"Forward\", \"Forward to next view\", \"fa fa-arrow-right icon-arrow-right\", \"forward\"], [\"\", \"\", \"\", \"\"], [\"Pan\", \"Left button pans, Right button zooms\\nx/y fixes axis, CTRL fixes aspect\", \"fa fa-arrows icon-move\", \"pan\"], [\"Zoom\", \"Zoom to rectangle\\nx/y fixes axis\", \"fa fa-square-o icon-check-empty\", \"zoom\"], [\"\", \"\", \"\", \"\"], [\"Download\", \"Download plot\", \"fa fa-floppy-o icon-save\", \"download\"]];\n\nmpl.extensions = [\"eps\", \"jpeg\", \"pgf\", \"pdf\", \"png\", \"ps\", \"raw\", \"svg\", \"tif\"];\n\nmpl.default_extension = \"png\";/* global mpl */\n\nvar comm_websocket_adapter = function (comm) {\n // Create a \"websocket\"-like object which calls the given IPython comm\n // object with the appropriate methods. Currently this is a non binary\n // socket, so there is still some room for performance tuning.\n var ws = {};\n\n ws.binaryType = comm.kernel.ws.binaryType;\n ws.readyState = comm.kernel.ws.readyState;\n function updateReadyState(_event) {\n if (comm.kernel.ws) {\n ws.readyState = comm.kernel.ws.readyState;\n } else {\n ws.readyState = 3; // Closed state.\n }\n }\n comm.kernel.ws.addEventListener('open', updateReadyState);\n comm.kernel.ws.addEventListener('close', updateReadyState);\n comm.kernel.ws.addEventListener('error', updateReadyState);\n\n ws.close = function () {\n comm.close();\n };\n ws.send = function (m) {\n //console.log('sending', m);\n comm.send(m);\n };\n // Register the callback with on_msg.\n comm.on_msg(function (msg) {\n //console.log('receiving', msg['content']['data'], msg);\n var data = msg['content']['data'];\n if (data['blob'] !== undefined) {\n data = {\n data: new Blob(msg['buffers'], { type: data['blob'] }),\n };\n }\n // Pass the mpl event to the overridden (by mpl) onmessage function.\n ws.onmessage(data);\n });\n return ws;\n};\n\nmpl.mpl_figure_comm = function (comm, msg) {\n // This is the function which gets called when the mpl process\n // starts-up an IPython Comm through the \"matplotlib\" channel.\n\n var id = msg.content.data.id;\n // Get hold of the div created by the display call when the Comm\n // socket was opened in Python.\n var element = document.getElementById(id);\n var ws_proxy = comm_websocket_adapter(comm);\n\n function ondownload(figure, _format) {\n window.open(figure.canvas.toDataURL());\n }\n\n var fig = new mpl.figure(id, ws_proxy, ondownload, element);\n\n // Call onopen now - mpl needs it, as it is assuming we've passed it a real\n // web socket which is closed, not our websocket->open comm proxy.\n ws_proxy.onopen();\n\n fig.parent_element = element;\n fig.cell_info = mpl.find_output_cell(\"
\");\n if (!fig.cell_info) {\n console.error('Failed to find cell for figure', id, fig);\n return;\n }\n fig.cell_info[0].output_area.element.on(\n 'cleared',\n { fig: fig },\n fig._remove_fig_handler\n );\n};\n\nmpl.figure.prototype.handle_close = function (fig, msg) {\n var width = fig.canvas.width / fig.ratio;\n fig.cell_info[0].output_area.element.off(\n 'cleared',\n fig._remove_fig_handler\n );\n fig.resizeObserverInstance.unobserve(fig.canvas_div);\n\n // Update the output cell to use the data from the current canvas.\n fig.push_to_output();\n var dataURL = fig.canvas.toDataURL();\n // Re-enable the keyboard manager in IPython - without this line, in FF,\n // the notebook keyboard shortcuts fail.\n IPython.keyboard_manager.enable();\n fig.parent_element.innerHTML =\n ' ';\n fig.close_ws(fig, msg);\n};\n\nmpl.figure.prototype.close_ws = function (fig, msg) {\n fig.send_message('closing', msg);\n // fig.ws.close()\n};\n\nmpl.figure.prototype.push_to_output = function (_remove_interactive) {\n // Turn the data on the canvas into data in the output cell.\n var width = this.canvas.width / this.ratio;\n var dataURL = this.canvas.toDataURL();\n this.cell_info[1]['text/html'] =\n ' ';\n};\n\nmpl.figure.prototype.updated_canvas_event = function () {\n // Tell IPython that the notebook contents must change.\n IPython.notebook.set_dirty(true);\n this.send_message('ack', {});\n var fig = this;\n // Wait a second, then push the new image to the DOM so\n // that it is saved nicely (might be nice to debounce this).\n setTimeout(function () {\n fig.push_to_output();\n }, 1000);\n};\n\nmpl.figure.prototype._init_toolbar = function () {\n var fig = this;\n\n var toolbar = document.createElement('div');\n toolbar.classList = 'btn-toolbar';\n this.root.appendChild(toolbar);\n\n function on_click_closure(name) {\n return function (_event) {\n return fig.toolbar_button_onclick(name);\n };\n }\n\n function on_mouseover_closure(tooltip) {\n return function (event) {\n if (!event.currentTarget.disabled) {\n return fig.toolbar_button_onmouseover(tooltip);\n }\n };\n }\n\n fig.buttons = {};\n var buttonGroup = document.createElement('div');\n buttonGroup.classList = 'btn-group';\n var button;\n for (var toolbar_ind in mpl.toolbar_items) {\n var name = mpl.toolbar_items[toolbar_ind][0];\n var tooltip = mpl.toolbar_items[toolbar_ind][1];\n var image = mpl.toolbar_items[toolbar_ind][2];\n var method_name = mpl.toolbar_items[toolbar_ind][3];\n\n if (!name) {\n /* Instead of a spacer, we start a new button group. 
*/\n if (buttonGroup.hasChildNodes()) {\n toolbar.appendChild(buttonGroup);\n }\n buttonGroup = document.createElement('div');\n buttonGroup.classList = 'btn-group';\n continue;\n }\n\n button = fig.buttons[name] = document.createElement('button');\n button.classList = 'btn btn-default';\n button.href = '#';\n button.title = name;\n button.innerHTML = ' ';\n button.addEventListener('click', on_click_closure(method_name));\n button.addEventListener('mouseover', on_mouseover_closure(tooltip));\n buttonGroup.appendChild(button);\n }\n\n if (buttonGroup.hasChildNodes()) {\n toolbar.appendChild(buttonGroup);\n }\n\n // Add the status bar.\n var status_bar = document.createElement('span');\n status_bar.classList = 'mpl-message pull-right';\n toolbar.appendChild(status_bar);\n this.message = status_bar;\n\n // Add the close button to the window.\n var buttongrp = document.createElement('div');\n buttongrp.classList = 'btn-group inline pull-right';\n button = document.createElement('button');\n button.classList = 'btn btn-mini btn-primary';\n button.href = '#';\n button.title = 'Stop Interaction';\n button.innerHTML = ' ';\n button.addEventListener('click', function (_evt) {\n fig.handle_close(fig, {});\n });\n button.addEventListener(\n 'mouseover',\n on_mouseover_closure('Stop Interaction')\n );\n buttongrp.appendChild(button);\n var titlebar = this.root.querySelector('.ui-dialog-titlebar');\n titlebar.insertBefore(buttongrp, titlebar.firstChild);\n};\n\nmpl.figure.prototype._remove_fig_handler = function (event) {\n var fig = event.data.fig;\n if (event.target !== this) {\n // Ignore bubbled events from children.\n return;\n }\n fig.close_ws(fig, {});\n};\n\nmpl.figure.prototype._root_extra_style = function (el) {\n el.style.boxSizing = 'content-box'; // override notebook setting of border-box.\n};\n\nmpl.figure.prototype._canvas_extra_style = function (el) {\n // this is important to make the div 'focusable\n el.setAttribute('tabindex', 0);\n // reach out to IPython and tell the keyboard manager to turn it's self\n // off when our div gets focus\n\n // location in version 3\n if (IPython.notebook.keyboard_manager) {\n IPython.notebook.keyboard_manager.register_events(el);\n } else {\n // location in version 2\n IPython.keyboard_manager.register_events(el);\n }\n};\n\nmpl.figure.prototype._key_event_extra = function (event, _name) {\n // Check for shift+enter\n if (event.shiftKey && event.which === 13) {\n this.canvas_div.blur();\n // select the cell after this one\n var index = IPython.notebook.find_cell_index(this.cell_info[0]);\n IPython.notebook.select(index + 1);\n }\n};\n\nmpl.figure.prototype.handle_save = function (fig, _msg) {\n fig.ondownload(fig, null);\n};\n\nmpl.find_output_cell = function (html_output) {\n // Return the cell and output element which can be found *uniquely* in the notebook.\n // Note - this is a bit hacky, but it is done because the \"notebook_saving.Notebook\"\n // IPython event is triggered only after the cells have been serialised, which for\n // our purposes (turning an active figure into a static one), is too late.\n var cells = IPython.notebook.get_cells();\n var ncells = cells.length;\n for (var i = 0; i < ncells; i++) {\n var cell = cells[i];\n if (cell.cell_type === 'code') {\n for (var j = 0; j < cell.output_area.outputs.length; j++) {\n var data = cell.output_area.outputs[j];\n if (data.data) {\n // IPython >= 3 moved mimebundle to data attribute of output\n data = data.data;\n }\n if (data['text/html'] === html_output) {\n return [cell, data, j];\n }\n 
}\n }\n }\n};\n\n// Register the function which deals with the matplotlib target/channel.\n// The kernel may be null if the page has been refreshed.\nif (IPython.notebook.kernel !== null) {\n IPython.notebook.kernel.comm_manager.register_target(\n 'matplotlib',\n mpl.mpl_figure_comm\n );\n}\n",
- "text/plain": [
- ""
- ]
- },
- "metadata": {},
- "output_type": "display_data"
- },
- {
- "data": {
- "text/html": [
- " "
- ],
- "text/plain": [
- ""
- ]
- },
- "metadata": {},
- "output_type": "display_data"
- }
- ],
- "source": [
- "fig = plt.figure(1)\n",
- "ax = fig.add_subplot(projection='3d')\n",
- "rot_show = R.from_quat(Result_rotation).as_matrix()\n",
- "ref_rot_show = R.from_quat(Reference_rotation).as_matrix()\n",
- "EE_position = np.array([0,0,0])\n",
- "xv = np.array(rot_show @ [1,0,0])/100\n",
- "yv = np.array(rot_show @ [0,1,0])/100\n",
- "zv = np.array(rot_show @ [0,0,1])/100\n",
- "ax.plot3D([0,old_3_position[0]],[0,old_3_position[1]],[0,old_3_position[2]],'-o',color = 'black')\n",
- "ax.plot3D([0,Result_position[0]],[0,Result_position[1]],[0,Result_position[2]],'-o',color = 'orange')\n",
- "ax.quiver(*Result_position.T,*xv.T, color='red')\n",
- "ax.quiver(*Result_position.T,*yv.T, color='g')\n",
- "ax.quiver(*Result_position.T,*zv.T, color='b')\n",
- "ax.quiver(*EE_position.T,*np.array([1,0,0]).T/100, color='red')\n",
- "ax.quiver(*EE_position.T,*np.array([0,1,0]).T/100, color='g')\n",
- "ax.quiver(*EE_position.T,*np.array([0,0,1]).T/100, color='b')\n",
- "set_axes_equal(ax)\n",
- "plt.show()"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 45,
- "id": "be7a0c51",
- "metadata": {},
- "outputs": [],
- "source": []
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3.8.10 64-bit",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.8.10"
- },
- "vscode": {
- "interpreter": {
- "hash": "916dbcbb3f70747c44a77c7bcd40155683ae19c65e1c03b4aa3499c5328201f1"
- }
- }
- },
- "nbformat": 4,
- "nbformat_minor": 5
-}
diff --git a/UR_arm_camera_calibration-main/images/Calibration_process.png b/UR_arm_camera_calibration-main/images/Calibration_process.png
deleted file mode 100644
index d2a60b0..0000000
Binary files a/UR_arm_camera_calibration-main/images/Calibration_process.png and /dev/null differ
diff --git a/UR_arm_camera_calibration-main/images/Camera_Intrinsics.png b/UR_arm_camera_calibration-main/images/Camera_Intrinsics.png
deleted file mode 100644
index 916987a..0000000
Binary files a/UR_arm_camera_calibration-main/images/Camera_Intrinsics.png and /dev/null differ
diff --git a/UR_arm_camera_calibration-main/images/Hand_eye_calibration.png b/UR_arm_camera_calibration-main/images/Hand_eye_calibration.png
deleted file mode 100644
index e0abba8..0000000
Binary files a/UR_arm_camera_calibration-main/images/Hand_eye_calibration.png and /dev/null differ
diff --git a/UR_arm_camera_calibration-main/images/Laser_Cali.png b/UR_arm_camera_calibration-main/images/Laser_Cali.png
deleted file mode 100644
index 839aa7b..0000000
Binary files a/UR_arm_camera_calibration-main/images/Laser_Cali.png and /dev/null differ
diff --git a/UR_arm_camera_calibration-main/images/rqt0.png b/UR_arm_camera_calibration-main/images/rqt0.png
deleted file mode 100644
index 99711e5..0000000
Binary files a/UR_arm_camera_calibration-main/images/rqt0.png and /dev/null differ
diff --git a/UR_arm_camera_calibration-main/images/rqt1.png b/UR_arm_camera_calibration-main/images/rqt1.png
deleted file mode 100644
index ad9bb5a..0000000
Binary files a/UR_arm_camera_calibration-main/images/rqt1.png and /dev/null differ
diff --git a/UR_arm_camera_calibration-main/images/rqt2.png b/UR_arm_camera_calibration-main/images/rqt2.png
deleted file mode 100644
index cbe75e1..0000000
Binary files a/UR_arm_camera_calibration-main/images/rqt2.png and /dev/null differ
diff --git a/UR_arm_camera_calibration-main/images/rqt3.png b/UR_arm_camera_calibration-main/images/rqt3.png
deleted file mode 100644
index 607bd6c..0000000
Binary files a/UR_arm_camera_calibration-main/images/rqt3.png and /dev/null differ
diff --git a/UR_arm_camera_calibration-main/images/tag_size_example.png b/UR_arm_camera_calibration-main/images/tag_size_example.png
deleted file mode 100644
index 52b3564..0000000
Binary files a/UR_arm_camera_calibration-main/images/tag_size_example.png and /dev/null differ
diff --git a/UR_arm_camera_calibration-main/multi_calibration/CMakeLists.txt b/UR_arm_camera_calibration-main/multi_calibration/CMakeLists.txt
deleted file mode 100644
index a031e62..0000000
--- a/UR_arm_camera_calibration-main/multi_calibration/CMakeLists.txt
+++ /dev/null
@@ -1,172 +0,0 @@
-cmake_minimum_required(VERSION 3.0.2)
-project(multi_calibration)
-
-set(CMAKE_BUILD_TYPE "Release")
-set(CMAKE_CXX_FLAGS "-std=c++14")
-set(CMAKE_CXX_FLAGS_RELEASE "-O3 -Wall -g")
-
-## Compile as C++11, supported in ROS Kinetic and newer
-# add_compile_options(-std=c++11)
-
-## Find catkin macros and libraries
-## if COMPONENTS list like find_package(catkin REQUIRED COMPONENTS xyz)
-## is used, also find other catkin packages
-find_package(catkin REQUIRED COMPONENTS
- roscpp
- std_msgs
- sensor_msgs
- geometry_msgs
- cv_bridge
- pcl_conversions
- pcl_ros
- moveit_core
- moveit_ros_planning
- moveit_ros_planning_interface
- moveit_visual_tools
- camera_model
- image_transport
- message_generation
- eigen_conversions
- tf
- tf_conversions
-)
-
-find_package(OpenCV REQUIRED)
-find_package(Eigen3 REQUIRED)
-find_package(PCL REQUIRED)
-
-## System dependencies are found with CMake's conventions
-# find_package(Boost REQUIRED COMPONENTS system)
-
-
-## Uncomment this if the package has a setup.py. This macro ensures
-## modules and global scripts declared therein get installed
-## See http://ros.org/doc/api/catkin/html/user_guide/setup_dot_py.html
-# catkin_python_setup()
-
-################################################
-## Declare ROS messages, services and actions ##
-################################################
-
-## To declare and build messages, services or actions from within this
-## package, follow these steps:
-## * Let MSG_DEP_SET be the set of packages whose message types you use in
-## your messages/services/actions (e.g. std_msgs, actionlib_msgs, ...).
-## * In the file package.xml:
-## * add a build_depend tag for "message_generation"
-## * add a build_depend and an exec_depend tag for each package in MSG_DEP_SET
-## * If MSG_DEP_SET isn't empty the following dependency has been pulled in
-## but can be declared for certainty nonetheless:
-## * add an exec_depend tag for "message_runtime"
-## * In this file (CMakeLists.txt):
-## * add "message_generation" and every package in MSG_DEP_SET to
-## find_package(catkin REQUIRED COMPONENTS ...)
-## * add "message_runtime" and every package in MSG_DEP_SET to
-## catkin_package(CATKIN_DEPENDS ...)
-## * uncomment the add_*_files sections below as needed
-## and list every .msg/.srv/.action file to be processed
-## * uncomment the generate_messages entry below
-## * add every package in MSG_DEP_SET to generate_messages(DEPENDENCIES ...)
-
-## Generate messages in the 'msg' folder
-# add_message_files(
-# FILES
-# Message1.msg
-# Message2.msg
-# )
-
-## Generate services in the 'srv' folder
-# add_service_files(
-# FILES
-# PointCloudStitcher.srv
-# )
-
-## Generate actions in the 'action' folder
-# add_action_files(
-# FILES
-# Action1.action
-# Action2.action
-# )
-
-## Generate added messages and services with any dependencies listed here
-# generate_messages(
-# DEPENDENCIES
-# std_msgs
-# )
-
-################################################
-## Declare ROS dynamic reconfigure parameters ##
-################################################
-
-## To declare and build dynamic reconfigure parameters within this
-## package, follow these steps:
-## * In the file package.xml:
-## * add a build_depend and a exec_depend tag for "dynamic_reconfigure"
-## * In this file (CMakeLists.txt):
-## * add "dynamic_reconfigure" to
-## find_package(catkin REQUIRED COMPONENTS ...)
-## * uncomment the "generate_dynamic_reconfigure_options" section below
-## and list every .cfg file to be processed
-
-## Generate dynamic reconfigure parameters in the 'cfg' folder
-# generate_dynamic_reconfigure_options(
-# cfg/DynReconf1.cfg
-# cfg/DynReconf2.cfg
-# )
-
-###################################
-## catkin specific configuration ##
-###################################
-## The catkin_package macro generates cmake config files for your package
-## Declare things to be passed to dependent projects
-## INCLUDE_DIRS: uncomment this if your package contains header files
-## LIBRARIES: libraries you create in this project that dependent projects also need
-## CATKIN_DEPENDS: catkin_packages dependent projects also need
-## DEPENDS: system dependencies of this project that dependent projects also need
-catkin_package(
- INCLUDE_DIRS include
- # LIBRARIES gripper_blaser_ros
- CATKIN_DEPENDS roscpp std_msgs sensor_msgs geometry_msgs message_runtime
- # DEPENDS system_lib
-)
-
-### FOR PROFILING (need to set after catkin_package)
-# add_compile_options(-pg -DCMAKE_BUILD_TYPE=Debug -DCMAKE_CXX_FLAGS="-fsanitize=address")
-set(catkin_LIBRARIES ${catkin_LIBRARIES} -pg)
-###
-
-###########
-## Build ##
-###########
-
-## Specify additional locations of header files
-## Your package locations should be listed before other locations
-include_directories(
- include
- ${catkin_INCLUDE_DIRS}
-)
-
-include_directories(${PCL_INCLUDE_DIRS})
-link_directories(${PCL_LIBRARY_DIRS})
-add_definitions(${PCL_DEFINITIONS})
-
-add_executable(trajectory_planner_node
- src/trajectory_planner_node.cpp
- src/trajectory_planner.cpp
- src/plannerFactory.cpp
- src/scanning_planner.cpp
-)
-
-add_executable(image_saving_node
- src/image_saving_node.cpp
-)
-
-add_executable(image_rect_node
- src/image_rect_node.cpp
-)
-
-target_link_libraries(trajectory_planner_node ${catkin_LIBRARIES} ${PCL_LIBRARIES} ${OpenCV_LIBS} ${Boost_LIBRARIES})
-target_link_libraries(image_saving_node ${catkin_LIBRARIES} ${PCL_LIBRARIES} ${OpenCV_LIBS})
-target_link_libraries(image_rect_node ${catkin_LIBRARIES} ${PCL_LIBRARIES} ${OpenCV_LIBS})
-
-
diff --git a/UR_arm_camera_calibration-main/multi_calibration/cfg/calib_params.yaml b/UR_arm_camera_calibration-main/multi_calibration/cfg/calib_params.yaml
deleted file mode 100644
index c0d7131..0000000
--- a/UR_arm_camera_calibration-main/multi_calibration/cfg/calib_params.yaml
+++ /dev/null
@@ -1,60 +0,0 @@
-# Primary blaser (with Magsafe) params
-# rectCameraMatrix:
-# - 6.6760052591980673e+02
-# - 6.6702754763359098e+02
-# - 1.2238794285085744e+03
-# - 9.1994665573314273e+02
-# handeyeTransform:
-# - -0.04196641
-# - 0.0012018
-# - 0.05935809
-# - -0.14216268
-# - 0.1234418
-# - -0.69520724
-# - 0.69371377
-# laserPlane:
-# - -3.10622
-# - -102.576
-# - -100
-# - 11.1776
-
-# Backup blaser (without Magsafe) with toolchanger params
-rectCameraMatrix:
- - 763.353
- - 762.501
- - 1223.29
- - 919.153
-handeyeTransform:
- - -0.04196641
- - 0.0012018
- - 0.05935809
- - -0.14216268
- - 0.1234418
- - -0.69520724
- - 0.69371377
-laserPlane:
- - -5.03368
- - -103.61
- - -100
- - 11.1094
-
-
-# Laser detector params
-
-ROI_x: 150
-ROI_y: 250
-ROI_width: 1630
-ROI_height: 1200
-img_width: 2448
-img_height: 1840
-hue_min: 10
-hue_max: 30
-sat_min: 150
-val_min: 200
-val_ratio: 0.05
-f_vis: false
-
-cameraTopic: "/ximea_cam/image_raw/compressed" # Got rid of the visual republish
-statusTopic: "/execution_status"
-pcTopic: "/laser_point_cloud"
-
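For reference, `handeyeTransform` above packs the camera-to-flange transform into seven numbers. Below is a minimal sketch of unpacking it with Eigen, assuming the layout is `[x, y, z, qx, qy, qz, qw]` (the x-y-z-w quaternion order matches scipy's `from_quat` used in this repo's notebook); the layout is an assumption, not something the config file itself states.

```cpp
#include <Eigen/Geometry>
#include <vector>

// Assumed layout: [x, y, z, qx, qy, qz, qw] (translation, then quaternion).
Eigen::Isometry3d handeyeFromParams(const std::vector<double>& p) {
  Eigen::Isometry3d T = Eigen::Isometry3d::Identity();
  T.translation() = Eigen::Vector3d(p[0], p[1], p[2]);
  // Note: Eigen's Quaterniond constructor takes (w, x, y, z).
  T.linear() = Eigen::Quaterniond(p[6], p[3], p[4], p[5]).normalized().toRotationMatrix();
  return T;
}
```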
diff --git a/UR_arm_camera_calibration-main/multi_calibration/cfg/rviz_config_demo.rviz b/UR_arm_camera_calibration-main/multi_calibration/cfg/rviz_config_demo.rviz
deleted file mode 100644
index 5c89df3..0000000
--- a/UR_arm_camera_calibration-main/multi_calibration/cfg/rviz_config_demo.rviz
+++ /dev/null
@@ -1,296 +0,0 @@
-Panels:
- - Class: rviz/Displays
- Help Height: 138
- Name: Displays
- Property Tree Widget:
- Expanded:
- - /Global Options1
- - /Status1
- - /PointCloud21
- - /PointCloud21/Autocompute Value Bounds1
- - /TF1
- - /TF1/Frames1
- - /Image1
- - /RobotModel1
- - /RobotModel1/Status1
- - /RobotModel1/Links1
- Splitter Ratio: 0.5
- Tree Height: 1520
- - Class: rviz/Selection
- Name: Selection
- - Class: rviz/Tool Properties
- Expanded:
- - /2D Pose Estimate1
- - /2D Nav Goal1
- - /Publish Point1
- Name: Tool Properties
- Splitter Ratio: 0.5886790156364441
- - Class: rviz/Views
- Expanded:
- - /Current View1
- Name: Views
- Splitter Ratio: 0.5
- - Class: rviz/Time
- Experimental: false
- Name: Time
- SyncMode: 0
- SyncSource: Image
-Preferences:
- PromptSaveOnExit: true
-Toolbars:
- toolButtonStyle: 2
-Visualization Manager:
- Class: ""
- Displays:
- - Alpha: 0.5
- Cell Size: 1
- Class: rviz/Grid
- Color: 160; 160; 164
- Enabled: true
- Line Style:
- Line Width: 0.029999999329447746
- Value: Lines
- Name: Grid
- Normal Cell Count: 0
- Offset:
- X: 0
- Y: 0
- Z: 0
- Plane: XY
- Plane Cell Count: 10
- Reference Frame:
- Value: true
- - Alpha: 1
- Autocompute Intensity Bounds: true
- Autocompute Value Bounds:
- Max Value: -0.5803430080413818
- Min Value: -0.6102539896965027
- Value: false
- Axis: Y
- Channel Name: intensity
- Class: rviz/PointCloud2
- Color: 255; 255; 255
- Color Transformer: AxisColor
- Decay Time: 10000
- Enabled: true
- Invert Rainbow: false
- Max Color: 255; 255; 255
- Min Color: 0; 0; 0
- Name: PointCloud2
- Position Transformer: XYZ
- Queue Size: 10
- Selectable: true
- Size (Pixels): 3
- Size (m): 0.009999999776482582
- Style: Points
- Topic: /laser_point_cloud/filtered
- Unreliable: false
- Use Fixed Frame: true
- Use rainbow: true
- Value: true
- - Class: rviz/TF
- Enabled: false
- Frame Timeout: 1e+06
- Frames:
- All Enabled: true
- Marker Alpha: 1
- Marker Scale: 0.30000001192092896
- Name: TF
- Show Arrows: true
- Show Axes: true
- Show Names: true
- Tree:
- {}
- Update Interval: 0
- Value: false
- - Class: rviz/Image
- Enabled: true
- Image Topic: /ximea_cam/image_raw
- Max Value: 1
- Median window: 5
- Min Value: 0
- Name: Image
- Normalize Range: true
- Queue Size: 2
- Transport Hint: compressed
- Unreliable: false
- Value: true
- - Alpha: 1
- Class: rviz/RobotModel
- Collision Enabled: false
- Enabled: true
- Links:
- All Links Enabled: false
- Expand Joint Details: false
- Expand Link Details: false
- Expand Tree: false
- Link Tree Style: Links in Alphabetic Order
- back_plane:
- Alpha: 1
- Show Axes: false
- Show Trail: false
- Value: true
- base:
- Alpha: 1
- Show Axes: false
- Show Trail: false
- base_link:
- Alpha: 1
- Show Axes: false
- Show Trail: false
- base_link_inertia:
- Alpha: 1
- Show Axes: false
- Show Trail: false
- Value: true
- bed:
- Alpha: 1
- Show Axes: false
- Show Trail: false
- Value: false
- bed2:
- Alpha: 1
- Show Axes: false
- Show Trail: false
- Value: false
- blaser:
- Alpha: 1
- Show Axes: false
- Show Trail: false
- Value: true
- bottom_plane:
- Alpha: 1
- Show Axes: false
- Show Trail: false
- Value: true
- flange:
- Alpha: 1
- Show Axes: false
- Show Trail: false
- forearm_link:
- Alpha: 1
- Show Axes: false
- Show Trail: false
- Value: true
- front_plane:
- Alpha: 1
- Show Axes: false
- Show Trail: false
- Value: true
- left_plane:
- Alpha: 1
- Show Axes: false
- Show Trail: false
- Value: true
- right_plane:
- Alpha: 1
- Show Axes: false
- Show Trail: false
- Value: true
- shoulder_link:
- Alpha: 1
- Show Axes: false
- Show Trail: false
- Value: true
- tool0:
- Alpha: 1
- Show Axes: false
- Show Trail: false
- upper_arm_link:
- Alpha: 1
- Show Axes: false
- Show Trail: false
- Value: true
- world:
- Alpha: 1
- Show Axes: false
- Show Trail: false
- wrist_1_link:
- Alpha: 1
- Show Axes: false
- Show Trail: false
- Value: true
- wrist_2_link:
- Alpha: 1
- Show Axes: false
- Show Trail: false
- Value: true
- wrist_3_link:
- Alpha: 1
- Show Axes: false
- Show Trail: false
- Value: true
- Name: RobotModel
- Robot Description: robot_description
- TF Prefix: ""
- Update Interval: 0
- Value: true
- Visual Enabled: true
- Enabled: true
- Global Options:
- Background Color: 48; 48; 48
- Default Light: true
- Fixed Frame: base_link
- Frame Rate: 30
- Name: root
- Tools:
- - Class: rviz/Interact
- Hide Inactive Objects: true
- - Class: rviz/MoveCamera
- - Class: rviz/Select
- - Class: rviz/FocusCamera
- - Class: rviz/Measure
- - Class: rviz/SetInitialPose
- Theta std deviation: 0.2617993950843811
- Topic: /initialpose
- X std deviation: 0.5
- Y std deviation: 0.5
- - Class: rviz/SetGoal
- Topic: /move_base_simple/goal
- - Class: rviz/PublishPoint
- Single click: true
- Topic: /clicked_point
- Value: true
- Views:
- Current:
- Class: rviz/Orbit
- Distance: 0.41429978609085083
- Enable Stereo Rendering:
- Stereo Eye Separation: 0.05999999865889549
- Stereo Focal Distance: 1
- Swap Stereo Eyes: false
- Value: false
- Field of View: 0.7853981852531433
- Focal Point:
- X: -0.12903162837028503
- Y: 0.4833604395389557
- Z: 0.5293453931808472
- Focal Shape Fixed Size: true
- Focal Shape Size: 0.05000000074505806
- Invert Z Axis: false
- Name: Current View
- Near Clip Distance: 0.009999999776482582
- Pitch: -0.1402035504579544
- Target Frame:
- Yaw: 4.698592185974121
- Saved: ~
-Window Geometry:
- Displays:
- collapsed: false
- Height: 2032
- Hide Left Dock: false
- Hide Right Dock: false
- Image:
- collapsed: false
- QMainWindow State: 000000ff00000000fd0000000400000000000001f7000006e8fc0200000008fb0000001200530065006c0065006300740069006f006e00000001e10000009b000000b000fffffffb0000001e0054006f006f006c002000500072006f007000650072007400690065007302000001ed000001df00000185000000b0fb000000120056006900650077007300200054006f006f02000001df000002110000018500000122fb000000200054006f006f006c002000500072006f0070006500720074006900650073003203000002880000011d000002210000017afb000000100044006900730070006c006100790073010000006e000006e80000018200fffffffb0000002000730065006c0065006300740069006f006e00200062007500660066006500720200000138000000aa0000023a00000294fb00000014005700690064006500530074006500720065006f02000000e6000000d2000003ee0000030bfb0000000c004b0069006e0065006300740200000186000001060000030c0000026100000001000003b9000006e8fc0200000004fb0000001e0054006f006f006c002000500072006f00700065007200740069006500730100000041000000780000000000000000fb0000000a00560069006500770073010000006e000003ab0000013200fffffffb0000000a0049006d0061006700650100000425000003310000002600fffffffb0000001200530065006c0065006300740069006f006e010000025a000000b200000000000000000000000200000490000000a9fc0100000001fb0000000a00560069006500770073030000004e00000080000002e1000001970000000300000e7000000060fc0100000002fb0000000800540069006d0065010000000000000e700000057100fffffffb0000000800540069006d00650100000000000004500000000000000000000008a8000006e800000004000000040000000800000008fc0000000100000002000000010000000a0054006f006f006c00730100000000ffffffff0000000000000000
- Selection:
- collapsed: false
- Time:
- collapsed: false
- Tool Properties:
- collapsed: false
- Views:
- collapsed: false
- Width: 3696
- X: 144
- Y: 54
diff --git a/UR_arm_camera_calibration-main/multi_calibration/cfg/trajectory_planner.yaml b/UR_arm_camera_calibration-main/multi_calibration/cfg/trajectory_planner.yaml
deleted file mode 100644
index f6d246c..0000000
--- a/UR_arm_camera_calibration-main/multi_calibration/cfg/trajectory_planner.yaml
+++ /dev/null
@@ -1,46 +0,0 @@
-homePositions:
- - 1.570796 # shoulder_pan_joint
- - -2.35619449019 # shoulder_lift_joint
- - 2.35619449019 # elbow_joint
- - -3.14159 # wrist_1_joint
- - -1.570796 # wrist_2_joint
- - 0.0 # wrist_3_joint
-
-bedOrigin:
- - -0.113007152 # x
- - 0.494493063 # y
- - 0.554776374 # z
-
-cartVelocityScaling: 0.07
-cartAccelerationScaling: 0.02
-
-# Camera Intrinsic Calibration parameters
-pointPerSide: 4
-pointPerHeight: 2
-
-# Handeye Calibration parameters
-numPointsPerVertArc: 10
-numVertArc: 18
-numRadius: 2
-
-# Laser Cam calibration parameters
-numLinePerPlane: 4
-numPlaneAngles: 3
-numPlanePerCircle: 16
-
-# Scanning parameters
-rightPositions:
- - -0.42655
- - -1.88600
- - -1.5
- - 0.212
- - -1.107
- - 0
-pointResolution: 30
-searchRadius: 0.03
-posThreshold: 0.001
-rotThreshold: 0.01
-pc_in_topic: "/laser_point_cloud/filtered"
-sp_verbose: True
-goalReachedCheckInterval: 0.3
-map_directory: "/home/wu/Documents/biorobotics/"
\ No newline at end of file
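These keys are served to the planner nodes through the ROS parameter server once the launch file loads this YAML. A minimal sketch of reading a few of them with `ros::NodeHandle::getParam`; the node name and which node consumes which key are assumptions:

```cpp
#include <ros/ros.h>
#include <string>
#include <vector>

int main(int argc, char** argv) {
  ros::init(argc, argv, "trajectory_planner_params_demo");
  ros::NodeHandle nh;

  std::vector<double> home_positions;
  double cart_vel_scaling = 0.0;
  int point_per_side = 0;
  std::string map_directory;

  // Keys match trajectory_planner.yaml; fail loudly if the file was not loaded.
  if (!nh.getParam("homePositions", home_positions) ||
      !nh.getParam("cartVelocityScaling", cart_vel_scaling) ||
      !nh.getParam("pointPerSide", point_per_side) ||
      !nh.getParam("map_directory", map_directory)) {
    ROS_ERROR("trajectory_planner.yaml parameters missing");
    return 1;
  }
  ROS_INFO("Loaded %zu home joint positions", home_positions.size());
  return 0;
}
```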
diff --git a/UR_arm_camera_calibration-main/multi_calibration/cfg/ximea_80_calib.yaml b/UR_arm_camera_calibration-main/multi_calibration/cfg/ximea_80_calib.yaml
deleted file mode 100644
index cb81894..0000000
--- a/UR_arm_camera_calibration-main/multi_calibration/cfg/ximea_80_calib.yaml
+++ /dev/null
@@ -1,41 +0,0 @@
-%YAML:1.0
----
-# Primary blaser camera (with Magsafe)
-# model_type: MEI
-# camera_name: camera
-# image_width: 2448
-# image_height: 1840
-# mirror_parameters:
-# xi: 1.8516756874116116e+00
-# distortion_parameters:
-# k1: -8.1988658236129097e-01
-# k2: 6.3879257711668663e-02
-# p1: 8.5244705461647498e-03
-# p2: -8.1949134295897057e-03
-# projection_parameters:
-# gamma1: 3.3459441827828582e+03
-# gamma2: 3.3406052649685403e+03
-# u0: 1.2074996866814947e+03
-# v0: 8.7172321297820872e+02
-# width: 2448
-# height: 1840
-
-# Backup blaser camera (without Magsafe) with toolchanger
-model_type: MEI
-camera_name: camera
-image_width: 2448
-image_height: 1840
-mirror_parameters:
- xi: 2.2233477162
-distortion_parameters:
- k1: -0.8088886525
- k2: -0.8067357586
- p1: 0.0079433344
- p2: -0.0037897510
-projection_parameters:
- gamma1: 3811.1549321366
- gamma2: 3805.6811535143
- u0: 1221.7500158745
- v0: 892.4705799546
-width: 2448
-height: 1840
\ No newline at end of file
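This camodocal-style MEI intrinsics file is the one `image_rect_node.cpp` (later in this diff) loads via `CameraFactory::generateCameraFromYamlFile`. A minimal standalone sketch of the same loading step, assuming the `camera_model` package headers are on the include path:

```cpp
#include "camodocal/camera_models/CameraFactory.h"
#include <iostream>

int main(int argc, char** argv) {
  // argv[1]: path to ximea_80_calib.yaml, passed on the command line
  // just as image_rect_node does.
  camodocal::CameraPtr camera =
      camodocal::CameraFactory::instance()->generateCameraFromYamlFile(argv[1]);
  if (!camera) {
    std::cerr << "Failed to load camera model" << std::endl;
    return 1;
  }
  std::cout << "Loaded camera: " << camera->cameraName() << std::endl;
  return 0;
}
```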
diff --git a/UR_arm_camera_calibration-main/multi_calibration/include/README.md b/UR_arm_camera_calibration-main/multi_calibration/include/README.md
deleted file mode 100644
index c53ec92..0000000
--- a/UR_arm_camera_calibration-main/multi_calibration/include/README.md
+++ /dev/null
@@ -1,2 +0,0 @@
-BaseMotionPlanner class: a base class template that provides the low-level MoveIt interfacing and ROS communication shared by the different kinds of calibration motion.
-generateMotion(): generates a motion plan for the robot arm to perform the corresponding calibration motion based on the given parameters, e.g. 4 samples per side for camera intrinsic calibration.
\ No newline at end of file
diff --git a/UR_arm_camera_calibration-main/multi_calibration/include/plannerFactory.hpp b/UR_arm_camera_calibration-main/multi_calibration/include/plannerFactory.hpp
deleted file mode 100644
index ab09203..0000000
--- a/UR_arm_camera_calibration-main/multi_calibration/include/plannerFactory.hpp
+++ /dev/null
@@ -1,18 +0,0 @@
-#include "trajectory_planner.hpp"
-
-class plannerFactory{
-
-public:
-    enum calibType {
-        camera_intrinsic,
-        hand_eye,
-        laser_cam
-    };
-    plannerFactory(std::string name_in, std::shared_ptr<ros::NodeHandle> nh_);
-    std::shared_ptr<BaseMotionPlanner> generatePlanner(calibType type);
-
-private:
-    std::string group_name;
-    std::shared_ptr<ros::NodeHandle> nh;
-    std::shared_ptr<BaseMotionPlanner> my_planner;
-};
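For orientation, here is a minimal sketch of how this factory might be driven from a node. The `"manipulator"` group name is an assumption, and the header does not show when `generateMotion()` fires (it is not public, so presumably the concrete planners invoke it internally):

```cpp
#include <memory>
#include <ros/ros.h>
#include "plannerFactory.hpp"

int main(int argc, char** argv) {
  ros::init(argc, argv, "calibration_planner_demo");
  auto nh = std::make_shared<ros::NodeHandle>();

  // Assumed MoveIt group name; use whatever your SRDF defines.
  plannerFactory factory("manipulator", nh);

  // Ask the factory for the planner matching the calibration stage.
  std::shared_ptr<BaseMotionPlanner> planner =
      factory.generatePlanner(plannerFactory::hand_eye);

  ros::spin();
  return 0;
}
```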
diff --git a/UR_arm_camera_calibration-main/multi_calibration/include/trajectory_planner.hpp b/UR_arm_camera_calibration-main/multi_calibration/include/trajectory_planner.hpp
deleted file mode 100644
index 8897454..0000000
--- a/UR_arm_camera_calibration-main/multi_calibration/include/trajectory_planner.hpp
+++ /dev/null
@@ -1,93 +0,0 @@
-#include "ros/ros.h"
-#include <memory>
-#include <string>
-#include <vector>
-#include <cmath>
-#include <geometry_msgs/Pose.h>
-#include "std_msgs/String.h"
-
-#include <moveit/move_group_interface/move_group_interface.h>
-#include <moveit/planning_scene_interface/planning_scene_interface.h>
-#include <moveit/robot_state/robot_state.h>
-#include <moveit/trajectory_processing/iterative_time_parameterization.h>
-
-
-class BaseMotionPlanner {
-
-public:
-    BaseMotionPlanner(std::string name_in, std::shared_ptr<ros::NodeHandle> nh_);
-    ~BaseMotionPlanner();
-    geometry_msgs::Pose getCurrentPose();
-protected:
-    std::string move_group_name;
-    std::shared_ptr<moveit::planning_interface::MoveGroupInterface> move_group;
-    const robot_state::JointModelGroup* joint_model_group;
-    std::shared_ptr<moveit::planning_interface::MoveGroupInterface::Plan> my_plan;
-    trajectory_processing::IterativeParabolicTimeParameterization iptp;
-
-    std::shared_ptr<ros::NodeHandle> nh;
-
-    std::vector<double> start_positions;
-    geometry_msgs::Pose origin;
-
-    virtual void generateMotion() = 0;
-    bool plan(std::vector<geometry_msgs::Pose>& pose);
-    bool execute();
-    bool checkEERotation();
-    bool goHomePosition();
-    bool goJointPosition(std::vector<double> jointGoals);
-    double deg2rad(double deg){return (deg*M_PI/180.0);}
-};
-
-class CameraIntrinsicMotionPlanner : public BaseMotionPlanner{
-
-public:
-    CameraIntrinsicMotionPlanner(std::string name_in, std::shared_ptr<ros::NodeHandle> nh_);
-private:
-    int samplePerPoint, pointPerSide, pointPerHeight;
-    std::vector<geometry_msgs::Quaternion> orientations;
-    std::vector<double> sidePositions;
-    std::vector<double> heightPositions;
-
- void generateMotion();
-};
-
-class HandEyeMotionPlanner : public BaseMotionPlanner{
-
-public:
-    HandEyeMotionPlanner(std::string name_in, std::shared_ptr<ros::NodeHandle> nh_);
-private:
-    int numPointsPerVertArc, numVertArc, numRadius;
-    std::vector<double> vertDegreeList;
-    std::vector<double> horiDegreeList;
-    std::vector<double> radiusList;
-
- void generateMotion();
-};
-
-class LaserCamMotionPlanner : public BaseMotionPlanner{
-
-public:
-    LaserCamMotionPlanner(std::string name_in, std::shared_ptr<ros::NodeHandle> nh_);
-private:
-    int numLinePerPlane, numPlaneAngles, numPlanePerCircle;
-    std::vector<double> vertDegreeList;
-    std::vector<double> horiDegreeList;
-    std::vector<double> radiusList;
-
- void generateMotion();
-};
diff --git a/UR_arm_camera_calibration-main/multi_calibration/launch/auto_calibration.launch b/UR_arm_camera_calibration-main/multi_calibration/launch/auto_calibration.launch
deleted file mode 100644
index 03745ca..0000000
--- a/UR_arm_camera_calibration-main/multi_calibration/launch/auto_calibration.launch
+++ /dev/null
@@ -1,44 +0,0 @@
-<launch>
-
-  <!-- Camera intrinsics stage: raw ximea stream -->
-  <rosparam>
-    cameraTopic: "/ximea_cam/image_raw"
-  </rosparam>
-
-  <!-- Hand-eye stage: rectified stream -->
-  <rosparam>
-    cameraTopic: "/blaser_cam/image_rect_color"
-  </rosparam>
-
-  <!-- Camera-laser stage: rectified stream -->
-  <rosparam>
-    cameraTopic: "/blaser_cam/image_rect_color"
-  </rosparam>
-
-</launch>
diff --git a/UR_arm_camera_calibration-main/multi_calibration/package.xml b/UR_arm_camera_calibration-main/multi_calibration/package.xml
deleted file mode 100644
index 1de51a7..0000000
--- a/UR_arm_camera_calibration-main/multi_calibration/package.xml
+++ /dev/null
@@ -1,104 +0,0 @@
-<?xml version="1.0"?>
-<package format="2">
-  <name>multi_calibration</name>
-  <version>0.0.0</version>
-  <description>The multi_calibration package</description>
-
-  <maintainer email="">Luyuan Wang</maintainer>
-
-  <license>TODO</license>
-
-  <buildtool_depend>catkin</buildtool_depend>
-
-  <build_depend>roscpp</build_depend>
-  <build_depend>std_msgs</build_depend>
-  <build_depend>sensor_msgs</build_depend>
-  <build_export_depend>roscpp</build_export_depend>
-  <build_export_depend>std_msgs</build_export_depend>
-  <build_export_depend>sensor_msgs</build_export_depend>
-  <build_depend>pcl_conversions</build_depend>
-  <build_depend>pcl_ros</build_depend>
-  <build_depend>geometry_msgs</build_depend>
-  <build_depend>pluginlib</build_depend>
-  <build_depend>eigen</build_depend>
-  <build_depend>moveit_core</build_depend>
-  <build_depend>moveit_ros_planning</build_depend>
-  <build_depend>moveit_ros_planning_interface</build_depend>
-  <build_depend>moveit_ros_perception</build_depend>
-  <build_depend>interactive_markers</build_depend>
-  <build_depend>geometric_shapes</build_depend>
-  <build_depend>moveit_visual_tools</build_depend>
-  <build_depend>rosbag</build_depend>
-  <build_depend>tf2_ros</build_depend>
-  <build_depend>tf2_eigen</build_depend>
-  <build_depend>tf2_geometry_msgs</build_depend>
-  <build_depend>image_transport</build_depend>
-  <build_depend>message_generation</build_depend>
-
-  <exec_depend>panda_moveit_config</exec_depend>
-  <exec_depend>pluginlib</exec_depend>
-  <exec_depend>moveit_core</exec_depend>
-  <exec_depend>moveit_commander</exec_depend>
-  <exec_depend>moveit_fake_controller_manager</exec_depend>
-  <exec_depend>moveit_ros_planning_interface</exec_depend>
-  <exec_depend>moveit_ros_perception</exec_depend>
-  <exec_depend>interactive_markers</exec_depend>
-  <exec_depend>moveit_visual_tools</exec_depend>
-  <exec_depend>joy</exec_depend>
-  <exec_depend>pcl_ros</exec_depend>
-  <exec_depend>pcl_conversions</exec_depend>
-  <exec_depend>rosbag</exec_depend>
-  <exec_depend>roscpp</exec_depend>
-  <exec_depend>std_msgs</exec_depend>
-  <exec_depend>sensor_msgs</exec_depend>
-  <exec_depend>geometry_msgs</exec_depend>
-  <exec_depend>image_transport</exec_depend>
-  <exec_depend>message_runtime</exec_depend>
-
-  <export>
-  </export>
-</package>
diff --git a/UR_arm_camera_calibration-main/multi_calibration/src/image_rect_node.cpp b/UR_arm_camera_calibration-main/multi_calibration/src/image_rect_node.cpp
deleted file mode 100644
index 6a7a33a..0000000
--- a/UR_arm_camera_calibration-main/multi_calibration/src/image_rect_node.cpp
+++ /dev/null
@@ -1,129 +0,0 @@
-/**
- * @file image_rect_node.cpp
- * @author Yuchen Wu (wuyc@umich.edu)
- * @brief A node that undistorts camera images
- * @version 1.0
- * @date 2022-05-23
- *
- * @copyright Copyright (c) 2022
- *
- */
-
-#include "ros/ros.h"
-#include <string>
-#include <vector>
-#include <cv_bridge/cv_bridge.h>
-#include <image_transport/image_transport.h>
-#include <sensor_msgs/Image.h>
-#include <sensor_msgs/CameraInfo.h>
-#include <sensor_msgs/image_encodings.h>
-#include <opencv2/imgproc.hpp>
-#include <boost/array.hpp>
-#include "std_msgs/String.h"
-#include "camodocal/camera_models/CameraFactory.h"
-
-cv::Mat rect_map1_, rect_map2_, R_rect_cv_; // These matrices are filled once at node start-up.
-image_transport::Publisher img_pub;
-ros::Publisher info_pub;
-sensor_msgs::CameraInfo info;
-
-/**
- * @brief A callback function for undistorting a newly arrived image
- *
- * This function is called each time a new raw image is published
- * by the camera. The raw image is undistorted using the given
- * camera intrinsics, and the rectified image is then published.
- *
- * @param im_msg the message containing the new raw image
- */
-void image_callback(const sensor_msgs::ImageConstPtr im_msg) {
- cv_bridge::CvImagePtr cv_ptr;
- cv_ptr = cv_bridge::toCvCopy(im_msg, sensor_msgs::image_encodings::BGR8);
- cv::Mat undistort;
-
- // This is the key function that performs the undistortion
- cv::remap(cv_ptr->image, undistort, rect_map1_, rect_map2_, cv::INTER_CUBIC);
-
- // Create a new msg for the rectified image
- cv_bridge::CvImagePtr cv_ptr2(new cv_bridge::CvImage);
-
- cv_ptr2->encoding = "bgr8";
- cv_ptr2->header.stamp = ros::Time::now();
- cv_ptr2->header.frame_id = cv_ptr->header.frame_id;
- cv_ptr2->image = undistort;
- img_pub.publish(cv_ptr2->toImageMsg());
-
- // Publish the matching camera info carrying the **rectified** camera matrix
- info.header.stamp = cv_ptr2->header.stamp;
- info_pub.publish(info);
-}
-
-/**
- * @brief The main function that starts the node
- *
- * In this function, ONE subscriber and TWO publishers are set up.
- * The subscriber monitors the incoming raw camera images; the first
- * publisher publishes the rectified image, while the second
- * publishes the rectified CameraInfo for other components to use.
- *
- * @param argc
- * @param argv
- * @return int
- */
-int main(int argc, char** argv) {
- ros::init(argc, argv, "image_rect_node");
-
- // The path to the config is read from command line
- std::string configPath = argv[1];
-
- // Initialize the ROS publishers and subscribers
- ros::NodeHandle nh_;
- ros::Subscriber img_sub = nh_.subscribe("/ximea_cam/image_raw",1,image_callback);
- info_pub = nh_.advertise<sensor_msgs::CameraInfo>("/blaser_cam/camera_info",1);
- image_transport::ImageTransport it_(nh_);
- img_pub = it_.advertise("/blaser_cam/image_rect_color",1);
-
- // Generate a camera object to initialize the rect_map in later steps
- camodocal::CameraPtr m_camera_ = camodocal::CameraFactory::instance()->generateCameraFromYamlFile(configPath);
-
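- // Shrinking the (0,0) and (1,1) terms of the rectifying matrix below scales
- // the projected ray directions, which effectively zooms the rectified image
- // out so more of the original fisheye field of view survives undistortion.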
- R_rect_cv_ = cv::Mat::eye(3, 3, CV_32F);
- R_rect_cv_.at<float>(0, 0) = 0.2;
- R_rect_cv_.at<float>(1, 1) = 0.2;
-
- cv::Mat K_rect;
- K_rect = m_camera_->initUndistortRectifyMap(rect_map1_, rect_map2_, -1.0, -1.0,
- cv::Size(0,0), -1.0, -1.0, R_rect_cv_);
-
- // Prepare a rectified camera Info for future publish
- std::vector<double> cameraMatrix;
- double fx,fy,cx,cy;
-
- if(nh_.hasParam("rectCameraMatrix")){
- nh_.getParam("rectCameraMatrix",cameraMatrix);
- fx = cameraMatrix[0];
- fy = cameraMatrix[1];
- cx = cameraMatrix[2];
- cy = cameraMatrix[3];
- }
- else{
-  ROS_ERROR("Missing camera matrix, exiting");
-  ros::shutdown();
-  return 1; // Do not fall through with an uninitialized camera matrix.
- }
-
- info.header.frame_id = "tool0";
- info.width = 2448;
- info.height = 1840;
- info.distortion_model = "plumb_bob";
-  info.D = std::vector<double>{0,0,0,0,0};
-  info.K = boost::array<double, 9>{fx,0,cx,0,fy,cy,0,0,1};
-  info.P = boost::array<double, 12>{fx,0,cx,0,0,fy,cy,0,0,0,1,0};
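-  // The rectified image has no remaining distortion, so D is all zeros;
-  // K is the row-major 3x3 camera matrix and P the 3x4 projection matrix
-  // (monocular camera, hence the zero fourth column).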
-
-  ros::spin();
- return 0;
-}
\ No newline at end of file
diff --git a/UR_arm_camera_calibration-main/multi_calibration/src/image_saving_node.cpp b/UR_arm_camera_calibration-main/multi_calibration/src/image_saving_node.cpp
deleted file mode 100644
index 497ed4c..0000000
--- a/UR_arm_camera_calibration-main/multi_calibration/src/image_saving_node.cpp
+++ /dev/null
@@ -1,102 +0,0 @@
-/**
- * @file image_saving_node.cpp
- * @author Yuchen Wu (wuyc@umich.edu)
- * @brief A node that saves images from the camera
- * @version 1.0
- * @date 2022-05-20
- *
- * @copyright Copyright (c) 2022
- *
- */
-
-#include "ros/ros.h"
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include "std_msgs/String.h"
-
-bool flag = false;
-std::string dir;
-int imgCount = 0;
-
-/**
- * @brief A callback function for status confirmation
- *
- * This function will turn on the image saving flag when it
- * receives confirmation indicating a pose has been reached by
- * the robot.
- *
- * @param msg
- */
-void status_callback(const std_msgs::String::ConstPtr& msg){
- flag = true;
- ROS_INFO("Image saving node received confirmation!");
-}
-
-/**
- * @brief A callback function for new images
- *
- * This function will only proceed if the image-saving flag is set.
- * If set, it saves the image to the directory passed in on the
- * command line.
- *
- * The confirmation mechanism ensures that an image is only saved once
- * the robot has reached a designated pose and is momentarily
- * stationary, which guarantees image quality.
- *
- *
- * @param im_msg
- */
-void image_callback(const sensor_msgs::ImageConstPtr im_msg) {
- if(flag){
- flag = false;
-
- cv_bridge::CvImagePtr cv_ptr;
- cv_ptr = cv_bridge::toCvCopy(im_msg, sensor_msgs::image_encodings::BGR8);
-
- std::string fileName = dir + "/left-" + std::to_string(imgCount) + ".png";
-    cv::imwrite(fileName,cv_ptr->image);
- ROS_INFO("Image %d saved successfully!",imgCount++);
- }
-}
-
-/**
- * @brief A main function that starts the node
- *
- * This function sets up TWO subscribers: one checks for
- * confirmation from the trajectory planner, and the other
- * saves an image once confirmation has been received.
- *
- *
- * @param argc
- * @param argv
- * @return int
- */
-int main(int argc, char** argv) {
- ros::init(argc, argv, "image_saving_node");
- ros::NodeHandle nh_;
-
-  if (argc < 2) {
-    ROS_ERROR("Usage: image_saving_node <save_directory>");
-    return 1;
-  }
-  dir = argv[1];
- std::string topicName;
- if(nh_.hasParam("cameraTopic")){
- nh_.getParam("cameraTopic",topicName);
- }
-  else{
-    ROS_ERROR("Missing camera topic name, exiting");
-    ros::shutdown();
-    return 1;
-  }
-
- ros::Subscriber img_sub = nh_.subscribe(topicName,10,image_callback);
- ros::Subscriber status_sub = nh_.subscribe("execution_status",10,status_callback);
-
-  ros::spin();
- return 0;
-}
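-
-// Usage sketch (editor's note, not in the original source): the node takes
-// the save directory as its only argument and reads the camera topic from
-// the parameter server; package/executable names are assumed from this repo:
-//
-//   rosparam set cameraTopic /blaser_cam/image_rect_color
-//   rosrun multi_calibration image_saving_node /tmp/calib_images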
diff --git a/UR_arm_camera_calibration-main/multi_calibration/src/plannerFactory.cpp b/UR_arm_camera_calibration-main/multi_calibration/src/plannerFactory.cpp
deleted file mode 100644
index 8a8a8d6..0000000
--- a/UR_arm_camera_calibration-main/multi_calibration/src/plannerFactory.cpp
+++ /dev/null
@@ -1,47 +0,0 @@
-/**
- * @file plannerFactory.cpp
- * @author Yuchen Wu (wuyc@umich.edu)
- * @brief A planner factory for generating multiple motion planners
- * @version 1.0
- * @date 2022-05-29
- *
- * @copyright Copyright (c) 2022
- *
- */
-#include "plannerFactory.hpp"
-
-/**
- * @brief Construct a new planner Factory::planner Factory object
- *
- * @param name_in pass in the MoveIt! manipulator group name
- * @param nh_ pass in a ROS node handle to the class
- *
- * These params are stored inside the class and are used
- * when generating new motion planners
- */
-plannerFactory::plannerFactory(std::string name_in,std::shared_ptr<ros::NodeHandle> nh_) : \
- group_name(name_in), nh(nh_) {}
-
-/**
- * @brief a function that generates the corresponding motion planner from the factory
- *
- * @param type, the type of motion planner being requested
- * @return std::shared_ptr<BaseMotionPlanner>, a pointer to the generated planner
- */
-std::shared_ptr<BaseMotionPlanner> plannerFactory::generatePlanner(calibType type){
- switch(type){
- case(camera_intrinsic):
-      my_planner = std::make_shared<CameraIntrinsicMotionPlanner>(group_name,nh);
- break;
- case(hand_eye):
-      my_planner = std::make_shared<HandEyeMotionPlanner>(group_name,nh);
- break;
- case(laser_cam):
-      my_planner = std::make_shared<LaserCamMotionPlanner>(group_name,nh);
- break;
- default:
- ROS_ERROR("Undefined planner type");
- break;
- }
- return my_planner;
-}
\ No newline at end of file
diff --git a/UR_arm_camera_calibration-main/multi_calibration/src/trajectory_planner.cpp b/UR_arm_camera_calibration-main/multi_calibration/src/trajectory_planner.cpp
deleted file mode 100644
index 36cf83d..0000000
--- a/UR_arm_camera_calibration-main/multi_calibration/src/trajectory_planner.cpp
+++ /dev/null
@@ -1,645 +0,0 @@
-/**
- * @file trajectory_planner.cpp
- * @author Yuchen Wu (wuyc@umich.edu)
- * @brief The implementation of multiple motion planners
- * @version 1.1
- * @date 2022-08-15
- *
- * @copyright Copyright (c) 2022
- *
- */
-#include "trajectory_planner.hpp"
-MOVEIT_CLASS_FORWARD(MoveGroupInterface);
-MOVEIT_CLASS_FORWARD(Plan);
-
-/**
- * @brief Construct a new Base Motion Planner:: Base Motion Planner object
- *
- * @param name_in pass in the MoveIt! manipulator group name
- * @param nh_ pass in a ROS node handle to the class
- *
- * In this constructor,
- * 1) The class will collect necessary parameters from ROS parameter server
- * 2) It will initialize the robot to its home position
- */
-BaseMotionPlanner::BaseMotionPlanner(std::string name_in,std::shared_ptr<ros::NodeHandle> nh_){
- move_group_name = name_in;
-    move_group = std::make_shared<moveit::planning_interface::MoveGroupInterface>(move_group_name);
-    joint_model_group = move_group->getCurrentState()->getJointModelGroup(move_group_name);
-    my_plan = std::make_shared<moveit::planning_interface::MoveGroupInterface::Plan>();
-
- nh = nh_;
-
- if(nh->hasParam("homePositions")){
- nh->getParam("homePositions",start_positions);
- }
- else{
- ROS_ERROR("Missing home position joint values, exiting");
- ros::shutdown();
- }
-
-
- if(!goHomePosition()){
- ROS_ERROR("Error in initializing Robot's state");
- }
- else{
- ROS_INFO("Motion Planner successfully initializied");
- }
-
- if(nh->hasParam("bedOrigin")){
-        std::vector<double> origin_in;
- nh->getParam("bedOrigin",origin_in);
- origin.position.x = origin_in[0];
- origin.position.y = origin_in[1];
- origin.position.z = origin_in[2];
- }
- else{
- ROS_ERROR("Missing bed origin coordinates, exiting");
- ros::shutdown();
- }
-
-
-}
-
-/**
- * @brief Destroy the Base Motion Planner:: Base Motion Planner object
- *
- * This destructor will move the robot back to home position and gives
- * a message indicating successful completion
- */
-BaseMotionPlanner::~BaseMotionPlanner(){
- if(goHomePosition()){
- ROS_INFO("Trajectory successfully finished, press Ctrl+C to exit!\n");
- }
-}
-
-/**
- * @brief a get function for current pose information
- *
- * @return the robot's current pose
- */
-geometry_msgs::Pose BaseMotionPlanner::getCurrentPose(){
- return move_group->getCurrentPose().pose;
-}
-
-/**
- * @brief a plan function for the Base Motion Planner
- *
- * Note: this function only computes the motion plan; it will NOT execute it
- *
- * @param waypoints specifies the path that the robot will follow
- * @return true if the motion plan is found successfully
- * @return false if an error occurs while finding the motion plan
- */
-bool BaseMotionPlanner::plan(std::vector<geometry_msgs::Pose> &waypoints){
- moveit_msgs::RobotTrajectory trajectory_msg;
- move_group->setPlanningTime(10.0);
-
- double fraction = move_group->computeCartesianPath(waypoints,
- 0.01, // eef_step
- 0.0, // jump_threshold
- trajectory_msg, true);
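-    // Editor's note: `fraction` reports how much of the Cartesian path was
-    // achieved (1.0 = complete); it is not checked here, so a partial path
-    // can still be time-parameterized and executed.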
-
-    // The trajectory needs to be modified so that it includes velocities as well.
-    // First, create a RobotTrajectory object
- robot_trajectory::RobotTrajectory rt(move_group->getCurrentState()->getRobotModel(), "manipulator");
-
-    // Second, populate the RobotTrajectory from the trajectory message
- rt.setRobotTrajectoryMsg(*move_group->getCurrentState(), trajectory_msg);
-
-    // Third, compute the time stamps
- bool success = iptp.computeTimeStamps(rt);
-
- // Get RobotTrajectory_msg from RobotTrajectory
- rt.getRobotTrajectoryMsg(trajectory_msg);
-
-    // Finally, store the time-parameterized trajectory in the plan
- my_plan->trajectory_ = trajectory_msg;
-
- return success;
-}
-
-/**
- * @brief an execute function for the Base Motion Planner
- *
- * This function will execute the previously found motion plan, stored in
- * this->my_plan
- * Note: Please call the execute function after calling the plan function.
- *
- * @return true if the execute is successful
- * @return false if error reported during execution
- */
-bool BaseMotionPlanner::execute(){
- moveit::core::MoveItErrorCode result = move_group->execute(*my_plan);
- // move_group->stop();
-
- if(result != moveit::core::MoveItErrorCode::SUCCESS){
- return false;
- }
- else{
- return checkEERotation();
- }
-}
-
-/**
- * @brief a helper function to make sure the end effector does not hit its joint limit
- *
- * This function actively checks the rotation of Wrist3, a.k.a. the end effector rotation.
- *
- * If the rotation exceeds +/- 1.2 * pi, it performs a 2*pi correction towards
- * the other end of the limit.
- *
- * @return true if no correction is needed or correction performed successfully
- * @return false if correction cannot be performed
- */
-bool BaseMotionPlanner::checkEERotation(){
-    //We need to keep the EE joint away from the [-2pi, 2pi] hard limits,
-    //so a correction is necessary
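-    // Worked example (editor's note): if wrist3 sits at 1.3*pi, the code
-    // below subtracts 2*pi, landing at -0.7*pi -- the same tool orientation,
-    // but safely away from the +/-2*pi hard limit.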
- bool correctionResult = true;
- moveit::core::RobotStatePtr current_state = move_group->getCurrentState();
-
-    std::vector<double> joint_group_positions;
- current_state->copyJointGroupPositions(joint_model_group, joint_group_positions);
-
- if(abs(joint_group_positions[5]) > 1.2*M_PI){
-        ros::Publisher status_pub = nh->advertise<std_msgs::String>("execution_status", 1000);
-
- std_msgs::String msg2;
- std::stringstream end;
- end<<"End"<<" "< 0){
- step = -2*M_PI;
- }
- else{
- step = 2*M_PI;
- }
-
- joint_group_positions[5] += step;
- move_group->setJointValueTarget(joint_group_positions);
-        correctionResult = (move_group->plan(*my_plan) == moveit::planning_interface::MoveItErrorCode::SUCCESS);
-        correctionResult = correctionResult && (move_group->execute(*my_plan) == moveit::planning_interface::MoveItErrorCode::SUCCESS);
- move_group->stop();
-
- if(correctionResult){
- ROS_INFO("EE joint correction successful\n");
- }
- else{
- ROS_WARN("EE joint correction UNsuccessful\n");
- current_state->copyJointGroupPositions(joint_model_group, joint_group_positions);
- }
- ROS_INFO("Current end effector orientation: %f \n",joint_group_positions[5]);
-
- std_msgs::String msg;
- std::stringstream start;
- start<<"Start"<<" "< jointGoals){
- bool result = false;
- move_group->setJointValueTarget(jointGoals);
-    result = (move_group->plan(*my_plan) == moveit::planning_interface::MoveItErrorCode::SUCCESS);
-    result = result && (move_group->execute(*my_plan) == moveit::planning_interface::MoveItErrorCode::SUCCESS);
- return result;
-}
-
-/**
- * @brief Construct a new Camera Calibration Motion Planner Object
- *
- * This is a derived class from Base Motion Planner
- *
- * @param name_in pass in the MoveIt! manipulator group name
- * @param nh_ pass in a ROS node handle to the class
- *
- * The param passed in will be passed to Base Motion Planner
- *
- * In this constructor,
- * 1) The class will collect necessary parameters from ROS parameter server
- * 2) It will initialize the robot to its home position
- * Camera calibration specific:
- * 3) Generate pose candidates for collecting photos,
- * based on the parameter settings
- * 4) generateMotion() will be called and motion will be executed
- */
-CameraIntrinsicMotionPlanner::CameraIntrinsicMotionPlanner(std::string name_in,std::shared_ptr<ros::NodeHandle> nh_) \
-: BaseMotionPlanner(name_in,nh_){
- nh->getParam("pointPerSide",pointPerSide);
- nh->getParam("pointPerHeight",pointPerHeight);
-
-    //Prepare quaternions, 6 different orientations per point
- double roll = deg2rad(-90), pitch = deg2rad(-20);
- Eigen::Quaterniond q1 = Eigen::AngleAxisd(roll, Eigen::Vector3d::UnitX())
- * Eigen::AngleAxisd(pitch, Eigen::Vector3d::UnitY());
- orientations.push_back(q1);
-
- Eigen::Quaterniond q2 = q1 * Eigen::AngleAxisd(deg2rad(45),Eigen::Vector3d::UnitZ());
- orientations.push_back(q2);
-
- Eigen::Quaterniond q3 = q1 * Eigen::AngleAxisd(deg2rad(-45),Eigen::Vector3d::UnitZ());
- orientations.push_back(q3);
-
- Eigen::Quaterniond q4 = q1 * Eigen::AngleAxisd(deg2rad(30),Eigen::Vector3d::UnitX());
- orientations.push_back(q4);
-
- Eigen::Quaterniond q5 = q1 * Eigen::AngleAxisd(deg2rad(-30),Eigen::Vector3d::UnitX());
- orientations.push_back(q5);
-
- Eigen::Quaterniond q6(Eigen::AngleAxisd(roll, Eigen::Vector3d::UnitX()));
- orientations.push_back(q6);
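-    // Editor's note: six views per point in total -- the nominal tilted view
-    // q1, q1 yawed by +/-45 deg (q2, q3), q1 rolled by +/-30 deg (q4, q5),
-    // and the roll-only view q6.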
-
- //Prepare x and z positions
- double start1 = 0.06, end1 = 0;
- double step1 = (end1-start1)/(pointPerSide-1);
- double temp1 = start1;
- do{
- sidePositions.push_back(temp1);
- temp1 += step1;
- }
- while(temp1 > end1);
- sidePositions.push_back(end1);
-    for(int i = 0; i < 3; i++){
-        std::cout << sidePositions[i] << std::endl;  // print payload reconstructed; original lost in extraction
-    }
-
-    // NOTE: the rest of this constructor was lost in extraction. By symmetry
-    // with the other planners it presumably prepared heightPositions and then
-    // called generateMotion().
-    generateMotion();
-}
-
-/**
- * @brief This function will generate & execute the motion plan for camera
- * intrinsics calibration (header reconstructed; the original was lost in
- * extraction)
- *
- * Note: This method is called inside the constructor
- */
-void CameraIntrinsicMotionPlanner::generateMotion(){
-    ros::Publisher status_pub = nh->advertise<std_msgs::String>("execution_status", 1000);
- int totalPoints = pointPerHeight*pointPerSide*pointPerSide*6;
- int currentPoint = 0;
-
- for(int i = 0; i < pointPerHeight; i++){
- for(int j = 0; j < pointPerSide; j++){
- for(int k = 0; k < pointPerSide; k++){
- for(int l = 0; l < 6; l++){
-
- geometry_msgs::Pose nextPose = origin;
- nextPose.position.x += sidePositions[j];
- nextPose.position.y -= heightPositions[i];
- nextPose.position.z += sidePositions[k];
-
- tf::quaternionEigenToMsg(orientations[l],nextPose.orientation);
-
-                    std::vector<geometry_msgs::Pose> waypoints;
- // waypoints.push_back(move_group->getCurrentPose().pose);
- waypoints.push_back(nextPose);
-
- currentPoint++;
-
- if(plan(waypoints) && execute()){
- //Send message to aruco node to let it know it's time to take pictures and tf pairs
- std_msgs::String msg;
-
- std::stringstream ss;
- ss << "Pose successfully reached, total progress " << (currentPoint)/double(totalPoints)*100.0 <<"%.";
-
- msg.data = ss.str();
- ROS_INFO("%s", msg.data.c_str());
- status_pub.publish(msg);
-
- ros::Duration(0.5).sleep();
-
- listener.lookupTransform("/base_link", "/tool0", ros::Time(0), tf_e2b);
- tf::Vector3 armv = tf_e2b.getOrigin();
- tf::Quaternion armq = tf_e2b.getRotation();
-
- std::ofstream out;
- out.open(filePath, std::ios_base::app);
-
- out << armv.getX() << "," << armv.getY() << "," << armv.getZ() << ",";
- out << armq.getW() << "," << armq.getX() << "," << armq.getY() << "," << armq.getZ() << std::endl;
- }
- }
- }
- }
- }
-
-}
-
-/**
- * @brief Construct a new Handeye Calibration Motion Planner Object
- *
- * This is a derived class from Base Motion Planner
- *
- * @param name_in pass in the MoveIt! manipulator group name
- * @param nh_ pass in a ROS node handle to the class
- *
- * The param passed in will be passed to Base Motion Planner
- *
- * In this constructor,
- * 1) The class will collect necessary parameters from ROS parameter server
- * 2) It will initialize the robot to its home position
- * Handeye calibration specific:
- * 3) Generate angle candidates for collecting photos,
- * based on the parameters passed in
- * 4) generateMotion() will be called and motion will be executed
- */
-HandEyeMotionPlanner::HandEyeMotionPlanner(std::string name_in,std::shared_ptr<ros::NodeHandle> nh_)\
-: BaseMotionPlanner(name_in,nh_){
- //Collect Parameters
- nh->getParam("numPointsPerVertArc",numPointsPerVertArc);
- nh->getParam("numVertArc",numVertArc);
- nh->getParam("numRadius",numRadius);
-
- //Prepare the vertical degree lists
- double startAngle1 = 35.0, endAngle1 = 135.0;
- double step1 = (endAngle1-startAngle1)/(numPointsPerVertArc-1);
- double temp1 = startAngle1;
- do{
- vertDegreeList.push_back(temp1);
- temp1 += step1;
- }
- while(temp1 < endAngle1);
- vertDegreeList.push_back(endAngle1);
-
- //Prepare the horizontal degree lists
- double startAngle2 = 0.0, endAngle2 = 180.0;
- double step2 = (endAngle2-startAngle2)/(numVertArc/2);
- double temp2 = startAngle2;
- do{
- horiDegreeList.push_back(temp2);
- horiDegreeList.push_back(temp2+180);
- temp2 += step2;
- }
- while(temp2 < endAngle2);
-
- //Prepare the radius lists
- double startRadius = 0.17, endRadius = 0.18;
- double step3 = (endRadius-startRadius)/(numRadius-1);
- double temp3 = startRadius;
- do{
- radiusList.push_back(temp3);
- temp3 += step3;
- }
- while(temp3 < endRadius);
- radiusList.push_back(endRadius);
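-    // Worked example (editor's note): with numRadius = 2 this gives
-    // step3 = 0.01, so radiusList = {0.17, 0.18} -- two spherical shells
-    // one centimeter apart.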
-
- generateMotion();
-}
-
-/**
- * @brief This function will generate & execute the motion plan for hand-eye calibration
- *
- * This generates a motion plan that connects the generated view angles and
- * orders image_saving_node to save an image when each pose is reached
- *
- *
- * Note: This method is called inside the constructor
- *
- */
-void HandEyeMotionPlanner::generateMotion(){
-    ros::Publisher status_pub = nh->advertise<std_msgs::String>("execution_status", 1000);
- int totalPoints = numRadius*numVertArc*numPointsPerVertArc;
- int currentPoint = 0;
-
- ROS_INFO("Press Enter to start...\n");
- std::cin.get();
-
- for(int u = 0; u < numRadius; u++){
- for(int i = 0; i < numVertArc; i++){
- ROS_INFO("Starting new arc %f of radius %f\n",horiDegreeList[i],radiusList[u]);
-
- std_msgs::String msg2;
- std::stringstream end;
- end<<"End"<<" "< waypoints;
- waypoints.push_back(nextPose);
-
- currentPoint++;
-
- if(j == 1){
- std_msgs::String msg;
- std::stringstream start;
- start<<"Start"<<" "< nh_)\
- : BaseMotionPlanner(name_in,nh_){
-
- //Collect Parameters
- nh->getParam("numLinePerPlane",numLinePerPlane);
- nh->getParam("numPlaneAngles",numPlaneAngles);
- nh->getParam("numPlanePerCircle",numPlanePerCircle);
-
- origin.position.y -= 0.15;
-
- //Prepare the vertical degree lists
- double startAngle1 = 10.0, endAngle1 = 20.0;
- double step1 = (endAngle1-startAngle1)/(numPlaneAngles);
- double temp1 = startAngle1;
- do{
- vertDegreeList.push_back(temp1);
- temp1 += step1;
- }
- while(temp1 < endAngle1);
-
- //Prepare the horizontal degree lists
- double startAngle2 = 0.0, endAngle2 = 360.0;
- double step2 = (endAngle2-startAngle2)/(numPlanePerCircle);
- double temp2 = startAngle2;
- do{
- horiDegreeList.push_back(temp2);
- temp2 += step2;
- }
- while(temp2 < endAngle2);
-
- //Prepare the radius lists
- double startRadius = 0, endRadius = 0.07;
- double step3 = (endRadius-startRadius)/(numLinePerPlane-1);
- double temp3 = startRadius;
- do{
- radiusList.push_back(temp3);
- temp3 += step3;
- }
- while(temp3 < endRadius);
- radiusList.push_back(endRadius);
-
- generateMotion();
-}
-
-/**
- * @brief This function will generate & execute the motion plan for Laser-Camera calibration
- *
- * This generates a motion plan that connects the generated view angles and
- * orders image_saving_node to save an image when each pose is reached
- *
- *
- * Note: This method is called inside the constructor
- *
- */
-void LaserCamMotionPlanner::generateMotion(){
-    ros::Publisher status_pub = nh->advertise<std_msgs::String>("execution_status", 1000);
- int totalPoints = numLinePerPlane * numPlaneAngles * numPlanePerCircle;
- int currentPoint = 0;
-
- ROS_INFO("Press Enter to start...\n");
- std::cin.get();
-
- for(int u = 0; u < numPlanePerCircle; u++){
- for(int i = 0; i < numPlaneAngles; i++){
- ROS_INFO("Starting new arc %f of radius %f\n",horiDegreeList[i],radiusList[u]);
- for(int j = 0; j < numLinePerPlane; j++){
- geometry_msgs::Pose nextPose = origin;
- nextPose.position.x += radiusList[j]*sin(deg2rad(vertDegreeList[i]))*cos(deg2rad(horiDegreeList[u]));
- nextPose.position.y -= radiusList[j]*cos(deg2rad(vertDegreeList[i]));
- nextPose.position.z -= radiusList[j]*sin(deg2rad(vertDegreeList[i]))*sin(deg2rad(horiDegreeList[u]));
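-                // Editor's note: this places the tool on a sphere of radius
-                // radiusList[j] around the shifted origin, at polar angle
-                // vertDegreeList[i] and azimuth horiDegreeList[u].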
-
- double roll = deg2rad(-90), pitch = deg2rad(vertDegreeList[i] - 20), yaw = deg2rad(horiDegreeList[u]);
- Eigen::Quaterniond q;
- q = Eigen::AngleAxisd(roll, Eigen::Vector3d::UnitX())
- * Eigen::AngleAxisd(yaw, Eigen::Vector3d::UnitZ())
- * Eigen::AngleAxisd(pitch, Eigen::Vector3d::UnitY());
-
- tf::quaternionEigenToMsg(q,nextPose.orientation);
-
-                std::vector<geometry_msgs::Pose> waypoints;
- waypoints.push_back(nextPose);
-
- currentPoint++;
-
- if(plan(waypoints) && execute()){
- ros::Duration(1).sleep();
-
- //Send message to aruco node to let it know it's time to take pictures and tf pairs
- std_msgs::String msg;
-
- std::stringstream ss;
- ss << "Distance "<());
- plannerFactory::calibType type;
-
- if(atoi(argv[1]) == 0){
- type = plannerFactory::camera_intrinsic;
-        std::shared_ptr<BaseMotionPlanner> planner = factory.generatePlanner(type);
- }
- else if(atoi(argv[1]) == 1){
- type = plannerFactory::hand_eye;
-        std::shared_ptr<BaseMotionPlanner> planner = factory.generatePlanner(type);
- }
- else if(atoi(argv[1]) == 2){
- type = plannerFactory::laser_cam;
-        std::shared_ptr<BaseMotionPlanner> planner = factory.generatePlanner(type);
- }
-
- return 0;
-}
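-
-// Usage sketch (editor's note, not in the original source): the calibration
-// type is chosen by the first command-line argument; package/executable
-// names are assumed from this repo's layout.
-//
-//   rosrun multi_calibration trajectory_planner 0   # camera intrinsics
-//   rosrun multi_calibration trajectory_planner 1   # hand-eye
-//   rosrun multi_calibration trajectory_planner 2   # laser-camera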
-