
UnboundLocalError: local variable 'loss' referenced before assignment #16

Open
TousenKaname opened this issue Mar 13, 2024 · 2 comments

Comments

@TousenKaname

Thanks for your excellent work! However, I encounter a bug when I set 'use_pseudo_label=False':

Traceback (most recent call last):
  File "/nvme/gawang/miniconda3/envs/Segvol/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
    fn(i, *args)
  File "/nvme/gawang/git_repo/SegVol/train.py", line 193, in main_worker
    epoch_loss, iter_num = train_epoch(args, segvol_model, train_dataloader, optimizer, scheduler, epoch, rank, gpu, iter_num)
  File "/nvme/gawang/git_repo/SegVol/train.py", line 91, in train_epoch
    loss_step_avg += loss.item()
UnboundLocalError: local variable 'loss' referenced before assignment

The loss variable is never assigned in this case...
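For reference, the failing pattern reduces to something like this (a simplified, hypothetical reproduction, not the actual train.py code): `loss` is only assigned inside the pseudo-label branch, so the accumulation line sees an unbound name when the flag is off.

```python
import torch

class Args:
    use_pseudo_label = False  # the setting that triggers the error

def train_epoch(args, batches):
    # Hypothetical, stripped-down version of the suspected control flow in train.py.
    loss_step_avg = 0.0
    for batch in batches:
        if args.use_pseudo_label:
            loss = torch.tensor(0.5)  # `loss` is only ever assigned inside this branch
        # When use_pseudo_label=False the branch above never runs, so this line raises
        # UnboundLocalError: local variable 'loss' referenced before assignment.
        loss_step_avg += loss.item()
    return loss_step_avg

train_epoch(Args(), [None])
```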

@TousenKaname
Author

I also found that 'args.use_pseudo_label' only affects the loss calculation, which means the unsupervised_forward method still runs even when the flag is off, wasting compute...
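If the flag is meant to turn pseudo-label training off, I would expect both the extra forward pass and its loss term to sit behind the same guard, roughly like this (a sketch only; supervised_forward is an assumed helper name, not necessarily the real SegVol API):

```python
def train_step(args, model, image, gt_label, pseudo_label):
    # Sketch with assumed helper names (supervised_forward / unsupervised_forward).
    # The point: the pseudo-label forward pass is skipped entirely when the flag is
    # off, so no compute is wasted and `loss` is always defined.
    loss = supervised_forward(model, image, gt_label)
    if args.use_pseudo_label:
        loss = loss + unsupervised_forward(model, image, pseudo_label)
    return loss
```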

@Yuxin-Du-Lab
Collaborator

I'm sorry about that. The args.use_pseudo_label option is still unfinished; the originally released code only supports training with pseudo labels.

However, you can disable pseudo-label training by removing all pseudo-label-related code from the training loop. SegVol will perform better when trained with only ground-truth labels for specific categories.
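Concretely, after stripping the pseudo-label parts, a training step should reduce to a plain supervised loop along these lines (illustrative only; compute_supervised_loss is a placeholder for whatever supervised loss call train.py actually uses):

```python
def train_epoch(args, segvol_model, train_dataloader, optimizer):
    # Illustrative ground-truth-only loop; adapt names to the actual train.py structure.
    loss_step_avg = 0.0
    for batch in train_dataloader:
        image, gt_label = batch["image"], batch["label"]
        loss = compute_supervised_loss(segvol_model, image, gt_label)  # placeholder helper
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        loss_step_avg += loss.item()  # `loss` is always defined here
    return loss_step_avg / len(train_dataloader)
```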
