init_poses[pose_idx], gt_poses[pose_idx] IndexError: index 2 is out of bounds for dimension 0 with size 2 #6

Open
jiangyijin opened this issue Aug 5, 2024 · 0 comments

Hello, and thank you very much for the work you've done. I encountered a problem while running the code; could you tell me what causes this error and whether there is a way to resolve it?
(upnerf) ubuntu@ml-ubuntu20-04-desktop-v1-0-108gb-100m:/data/up_nerf/UP-NeRF$ python train.py --config configs/custom.yaml
Setting up [LPIPS] perceptual loss: trunk [alex], v[0.1], spatial [off]
/home/ubuntu/anaconda3/envs/upnerf/lib/python3.8/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
warnings.warn(
/home/ubuntu/anaconda3/envs/upnerf/lib/python3.8/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing weights=AlexNet_Weights.IMAGENET1K_V1. You can also use weights=AlexNet_Weights.DEFAULT to get the most up-to-date weights.
warnings.warn(msg)
Downloading: "https://download.pytorch.org/models/alexnet-owt-7be5be79.pth" to /home/ubuntu/.cache/torch/hub/checkpoints/alexnet-owt-7be5be79.pth
100%|█████████████████████████████████████████████| 233M/233M [00:28<00:00, 8.43MB/s]
Loading model from: /home/ubuntu/anaconda3/envs/upnerf/lib/python3.8/site-packages/lpips/weights/v0.1/alex.pth
wandb: (1) Create a W&B account
wandb: (2) Use an existing W&B account
wandb: (3) Don't visualize my results
wandb: Enter your choice: 2
wandb: You chose 'Use an existing W&B account'
wandb: Logging into wandb.ai. (Learn how to deploy a W&B server locally: https://wandb.me/wandb-server)
wandb: You can find your API key in your browser here: https://wandb.ai/authorize
wandb: Paste an API key from your profile and hit enter, or press ctrl+c to quit:
wandb: Appending key for api.wandb.ai to your netrc file: /home/ubuntu/.netrc
wandb: Tracking run with wandb version 0.17.5
wandb: Run data is saved locally in ./wandb/run-20240805_211648-qj7uf3qy
wandb: Run wandb offline to turn off syncing.
wandb: Syncing run UP-NeRF
wandb: ⭐️ View project at https://wandb.ai/1uuuu-/custom_pose_optimize
wandb: 🚀 View run at https://wandb.ai/1uuuuu-/custom_pose_optimize/runs/qj7uf3qy
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
You are using a CUDA device ('NVIDIA GeForce RTX 4090') that has Tensor Cores. To properly utilize them, you should set torch.set_float32_matmul_precision('medium' | 'high') which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]
Epoch 21: 27%|▎| 39/145 [00:15<00:41, 2.53it/s, v_num=f3qy, train/l_depth_c=4.65e-6, t
pose alignment is not converged
Traceback (most recent call last):
  File "train.py", line 91, in <module>
    main(parse_args(parser))
  File "train.py", line 79, in main
    trainer.fit(system, ckpt_path=hparams["resume_ckpt"])
  File "/home/ubuntu/anaconda3/envs/upnerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 608, in fit
    call._call_and_handle_interrupt(
  File "/home/ubuntu/anaconda3/envs/upnerf/lib/python3.8/site-packages/pytorch_lightning/trainer/call.py", line 38, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/home/ubuntu/anaconda3/envs/upnerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 650, in _fit_impl
    self._run(model, ckpt_path=self.ckpt_path)
  File "/home/ubuntu/anaconda3/envs/upnerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1103, in _run
    results = self._run_stage()
  File "/home/ubuntu/anaconda3/envs/upnerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1182, in _run_stage
    self._run_train()
  File "/home/ubuntu/anaconda3/envs/upnerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1205, in _run_train
    self.fit_loop.run()
  File "/home/ubuntu/anaconda3/envs/upnerf/lib/python3.8/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
    self.advance(*args, **kwargs)
  File "/home/ubuntu/anaconda3/envs/upnerf/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 267, in advance
    self._outputs = self.epoch_loop.run(self._data_fetcher)
  File "/home/ubuntu/anaconda3/envs/upnerf/lib/python3.8/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
    self.advance(*args, **kwargs)
  File "/home/ubuntu/anaconda3/envs/upnerf/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 213, in advance
    batch_output = self.batch_loop.run(kwargs)
  File "/home/ubuntu/anaconda3/envs/upnerf/lib/python3.8/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
    self.advance(*args, **kwargs)
  File "/home/ubuntu/anaconda3/envs/upnerf/lib/python3.8/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 90, in advance
    outputs = self.manual_loop.run(kwargs)
  File "/home/ubuntu/anaconda3/envs/upnerf/lib/python3.8/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
    self.advance(*args, **kwargs)
  File "/home/ubuntu/anaconda3/envs/upnerf/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/manual_loop.py", line 110, in advance
    training_step_output = self.trainer._call_strategy_hook("training_step", *kwargs.values())
  File "/home/ubuntu/anaconda3/envs/upnerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1485, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "/home/ubuntu/anaconda3/envs/upnerf/lib/python3.8/site-packages/pytorch_lightning/strategies/strategy.py", line 378, in training_step
    return self.model.training_step(*args, **kwargs)
  File "/data/up_nerf/UP-NeRF/models/nerf_system.py", line 221, in training_step
    self.log_pose()
  File "/home/ubuntu/anaconda3/envs/upnerf/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/data/up_nerf/UP-NeRF/models/nerf_system.py", line 441, in log_pose
    init_poses[pose_idx], gt_poses[pose_idx]
IndexError: index 2 is out of bounds for dimension 0 with size 2
wandb: / 37.613 MB of 37.613 MB uploaded
wandb: Run history:
wandb: epoch ▁▁▁▁▂▂▂▂▂▂▃▃▃▃▃▄▄▄▄▄▄▅▅▅▅▅▆▆▆▆▆▆▇▇▇▇▇▇██
wandb: lr ████▇▇▇▇▇▆▆▆▆▆▆▅▅▅▅▅▄▄▄▄▄▄▃▃▃▃▃▂▂▂▂▂▂▁▁▁
wandb: lr_pose ████▇▇▇▇▇▆▆▆▆▆▆▅▅▅▅▅▄▄▄▄▄▄▃▃▃▃▃▂▂▂▂▂▂▁▁▁
wandb: train/l_depth_c █▄▃▂▁▁▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁
wandb: train/l_depth_f █▄▄▂▂▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁
wandb: train/l_feat_c ██▇▇▇▄▄▃▂▂▂▂▂▂▂▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁
wandb: train/l_feat_f ██▇▆█▄▄▃▂▂▂▂▂▂▂▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁
wandb: train/loss █▇▆▅▆▃▃▂▂▂▂▂▂▁▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁
wandb: train/psnr ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁
wandb: trainer/global_step ▁▁▁▁▂▂▂▂▂▃▃▃▃▃▃▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▇▇▇▇▇▇███
wandb: val/loss █▅▄▄▃▃▂▂▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁
wandb: val/psnr ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁
wandb:
wandb: Run summary:
wandb: epoch 21
wandb: lr 0.00049
wandb: lr_pose 0.00195
wandb: train/l_depth_c 0.0
wandb: train/l_depth_f 0.0
wandb: train/l_feat_c 0.00011
wandb: train/l_feat_f 0.00011
wandb: train/loss 0.00024
wandb: train/psnr 0.0
wandb: trainer/global_step 2995
wandb: val/loss 0.00021
wandb: val/psnr 0.0
wandb:
wandb: 🚀 View run UP-NeRF at: https://wandb.ai/1uuuuu-/custom_pose_optimize/runs/qj7uf3qy
wandb: ⭐️ View project at: https://wandb.ai/1uuuuu-/custom_pose_optimize
wandb: Synced 6 W&B file(s), 774 media file(s), 2 artifact file(s) and 0 other file(s)
wandb: Find logs at: ./wandb/run-20240805_211648-qj7uf3qy/logs
wandb: WARNING The new W&B backend becomes opt-out in version 0.18.0; try it out with wandb.require("core")! See https://wandb.me/wandb-core for more information.
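For context on the crash (my reading of the traceback, not a confirmed fix): `log_pose` in `models/nerf_system.py` indexes `init_poses` and `gt_poses` with `pose_idx = 2`, but those tensors have only 2 entries along dimension 0, so only indices 0 and 1 are valid. Together with the `pose alignment is not converged` message printed just before the failure, this suggests the alignment step produced fewer reference poses than the logging loop expects for this custom dataset. A minimal defensive sketch, with hypothetical function and argument names rather than the repo's actual API, would clamp the loop to the poses actually available:

```python
import torch

# Hypothetical guard for the pose-error logging step (illustrative only):
# the traceback shows init_poses / gt_poses have size 2 on dim 0 while
# pose_idx reaches 2, so iterate only over indices present in all tensors.
def log_pose_errors(pred_poses: torch.Tensor,
                    init_poses: torch.Tensor,
                    gt_poses: torch.Tensor) -> None:
    n = min(len(pred_poses), len(init_poses), len(gt_poses))
    if n < len(pred_poses):
        print(f"log_pose: only {n} reference poses available, "
              f"skipping {len(pred_poses) - n} predicted poses")
    for pose_idx in range(n):
        init_pose, gt_pose = init_poses[pose_idx], gt_poses[pose_idx]
        # ... compute and log rotation/translation error vs. pred_poses[pose_idx]
```

If the custom scene is expected to contain more than two ground-truth poses, it is worth checking the data-preparation output before patching `log_pose`, since the size-2 tensor is more likely a symptom than the cause.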
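Separately, and unrelated to the crash: the Tensor Cores notice earlier in the log can be addressed with the exact call the warning suggests, for example near the top of `train.py` (the placement is my assumption):

```python
import torch

# Suggested by the PyTorch warning in the log; trades float32 matmul
# precision for Tensor Core throughput on the RTX 4090.
torch.set_float32_matmul_precision("high")  # or "medium" for more speed
```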
