
How to get transform data with 3d point cloud? #121

Open
WeiWenQiang123456 opened this issue Jan 18, 2023 · 0 comments

The point cloud is produced by oblique-photography modeling. Because the cloud contains a very large number of points, it is partitioned into voxels by octree-based segmentation. Each voxel is the smallest research unit and contains many points.
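The octree segmentation described above can be sketched with plain coordinate quantization: points that fall into the same cell form one voxel (a minimal stand-in for a real octree; the point data, the 100 m extent, and the 5 m voxel edge are all assumptions for illustration).

```python
import numpy as np

# Hypothetical stand-in for the oblique-photography point cloud:
# 10,000 random points with (x, y, z) coordinates in metres.
rng = np.random.default_rng(0)
points = rng.uniform(0.0, 100.0, size=(10_000, 3))

voxel_size = 5.0  # assumed edge length of each voxel

# Quantize coordinates to integer voxel indices; every point that
# lands in the same cell belongs to the same "smallest research unit".
voxel_idx = np.floor(points / voxel_size).astype(np.int64)

# Group points by voxel: unique cells plus an inverse map point -> cell.
cells, inverse = np.unique(voxel_idx, axis=0, return_inverse=True)
print(f"{len(points)} points grouped into {len(cells)} voxels")

# All points belonging to the first voxel:
first_voxel_points = points[inverse == 0]
```

A full octree would subdivide cells adaptively by point density; uniform binning as above is the flat, fixed-depth special case.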

A voxel is treated like a picture: it carries color, three-dimensional coordinates, roughness, and other spatial features. Each voxel is written to the database as a row of 89 features. To convert a one-dimensional row into a two-dimensional picture, and because the largest feature value is the color value 256, the row is normalized and standardized into a 256 × 256 matrix that represents one voxel.
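One possible reading of that row-to-matrix step can be sketched as follows. The exact layout of the 89 features inside the 256 × 256 matrix is not specified above, so the zero-padded embedding here is purely an assumption, as are the random row values.

```python
import numpy as np

# Hypothetical database row for one voxel: 89 features whose
# largest possible value is the color maximum of 256.
rng = np.random.default_rng(1)
row = rng.uniform(0.0, 256.0, size=89)

# Min-max normalization to [0, 1] using 256 as the global maximum.
normalized = row / 256.0

# Standardization (zero mean, unit variance) of the normalized row.
standardized = (normalized - normalized.mean()) / normalized.std()

# Embed the 89 values into a 256x256 matrix, zero-padded; this layout
# is a stand-in for whatever mapping the database row actually uses.
matrix = np.zeros((256, 256), dtype=np.float32)
matrix.flat[: standardized.size] = standardized
print(matrix.shape)  # one voxel as a 2-D "picture"
```

If the 89 features are instead repeated or interpolated to fill the full 256 × 256 grid, only the embedding step changes; the normalization and standardization are the same.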

In IIC, we know that transforming the data is important. My question is whether I should first apply the color and rotation transforms to the point cloud and then extract features, or first extract features and then transform the resulting 256 × 256 matrix.
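For context, IIC pairs each sample with a transformed view and maximizes mutual information between their cluster assignments. The second option above ("transform the 256 × 256 matrix") can be sketched like this; the choice of a 90-degree rotation as the perturbation is an assumption, since IIC papers commonly use crops, flips, and color jitter, and the input matrix here is random placeholder data.

```python
import numpy as np

# Placeholder for one voxel's 256x256 feature matrix.
rng = np.random.default_rng(2)
voxel_image = rng.standard_normal((256, 256)).astype(np.float32)

# g(x): a geometric perturbation applied AFTER feature extraction,
# i.e. directly to the 2-D matrix rather than to the raw point cloud.
paired_view = np.rot90(voxel_image)

# (x, g(x)) is the pair that would be fed to the clustering network.
pair = (voxel_image, paired_view)
```

The first option (transform the raw point cloud, then re-extract features) would instead rotate the 3-D coordinates and perturb colors before voxelization, which is more expensive but lets the transform act on genuine spatial structure rather than on the matrix layout.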
