
Input dimensions and feature handling issues in S3DIS dataset model training #255

Open
gitKincses opened this issue Sep 11, 2024 · 0 comments

Hi! I'm training a model using the S3DIS dataset, and I have some questions regarding input dimensions and features.

  1. What does `in_features_dim` represent? It is set to 5, but I'm unsure what it includes, given that the S3DIS dataset provides 6 per-point attributes: x, y, z, red, green, and blue.
  2. On line 167 of `trainer.py`, the statement `for batch in training_loader:` yields a `batch.features` tensor of shape (61700, 5), which is then cloned into `x` in `architectures.py`. However, I don't understand how the batches are assembled, since I can't find any tensor of shape (*, 5) inside the `training_loader`.
  3. I've been debugging to figure out which values I need to change to adapt the code to the DALES dataset, whose points carry x, y, z, and intensity. However, I haven't been able to make the correct changes, and I run into the following error:

     ```
     File "/home/hqu/KPConv-PyTorch/models/blocks.py", line 372, in forward
       kernel_outputs = torch.matmul(weighted_features, self.weights)
     RuntimeError: Expected size for first two dimensions of batch2 tensor to be: [15, 3] but got: [15, 4].
     ```
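For what it's worth, the error itself is a plain shape mismatch in a batched `torch.matmul`: the layer's weight tensor has 15 kernel points with a per-point input width of 4, while the incoming feature tensor only carries 3 feature columns (or vice versa, depending on which value was edited). A minimal sketch, assuming nothing about the actual KPConv code beyond the `matmul` call in the traceback (all shapes below are illustrative, not taken from the repository):

```python
import torch

# Hypothetical KPConv-style shapes: weights is
# (num_kernel_points, in_dim, out_dim), features is
# (num_kernel_points, num_points, feat_dim). Batched matmul
# requires feat_dim == in_dim.
K, n_points, out_dim = 15, 100, 64

weights = torch.randn(K, 4, out_dim)            # layer built for 4 input features
weighted_features = torch.randn(K, n_points, 3)  # but only 3 features arrive

try:
    torch.matmul(weighted_features, weights)
except RuntimeError as e:
    # Reproduces the same class of error as the traceback:
    # expected batch2 dims [15, 3], got [15, 4]
    print(e)

# Once the feature width matches the weight tensor's in_dim,
# the product goes through:
ok = torch.matmul(torch.randn(K, n_points, 4), weights)
print(ok.shape)  # (15, 100, 64)
```

So the fix presumably means making the configured input feature dimension and the feature tensor built by the dataset/dataloader agree, rather than changing only one of the two.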