Obtaining 3D volumetric information about an object is essential in applications ranging from autonomous manufacturing to robotic scene perception. RGB-D sensors are widely used to capture the depth information needed for this task. To reconstruct the 3D volumetric model of an object, this paper designs an extended generative adversarial network (GAN) with a recurrent generator. The model takes either a single depth scan or a sequence of depth scans of an object and reconstructs its 3D volumetric model. Specifically, 3D long short-term memory (LSTM) units in the generator extract features from the sequence of depth scans across time steps. The reconstructed results are evaluated by computing intersection over union (IoU) in both 3D space and 2D projection. The model achieved 77.71% IoU, 80.08% hit rate, and 97.45% accuracy, outperforming other methods. Some 3D reconstruction results are shown below.
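For reference, here is a minimal sketch of how the reported evaluation metrics could be computed on voxel occupancy grids, covering IoU in both 3D space and 2D projection. The function names (`voxel_iou`, `projection_iou`), the 0.5 binarisation threshold, and the 32³ grid resolution are illustrative assumptions and not taken from the paper; the repository's own evaluation code may differ.

```python
import numpy as np

def voxel_iou(pred, gt, threshold=0.5):
    """IoU between a predicted occupancy grid and a binary ground-truth grid.

    pred : (D, H, W) array of occupancy probabilities in [0, 1]
    gt   : (D, H, W) binary array of ground-truth occupancy
    """
    pred_occ = pred >= threshold  # binarise the predicted occupancies (threshold is an assumption)
    gt_occ = gt.astype(bool)
    intersection = np.logical_and(pred_occ, gt_occ).sum()
    union = np.logical_or(pred_occ, gt_occ).sum()
    return intersection / union if union > 0 else 1.0

def projection_iou(pred, gt, axis=0, threshold=0.5):
    """IoU of the 2D silhouettes obtained by projecting both grids along one axis."""
    pred_sil = (pred >= threshold).any(axis=axis)
    gt_sil = gt.astype(bool).any(axis=axis)
    intersection = np.logical_and(pred_sil, gt_sil).sum()
    union = np.logical_or(pred_sil, gt_sil).sum()
    return intersection / union if union > 0 else 1.0

# Toy example on a random 32^3 grid (the resolution is an assumption, not from the paper).
rng = np.random.default_rng(0)
gt = rng.random((32, 32, 32)) > 0.7
pred = np.clip(gt + rng.normal(0, 0.3, gt.shape), 0, 1)
print(f"3D IoU: {voxel_iou(pred, gt):.4f}, 2D projection IoU: {projection_iou(pred, gt):.4f}")
```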