some warnings during training #3

Open
MichaelChen147 opened this issue Jun 30, 2024 · 0 comments

Comments

@MichaelChen147

Thanks for your great work!
I have some questions.

/home/usr23/wenyichen/miniconda3/envs/open-universe/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:139: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of lr_scheduler.step() before optimizer.step(). "
[W reducer.cpp:1300] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
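
For the second warning, I assume it refers to the find_unused_parameters flag passed to the DDP wrapper; I have not located where this repo sets it, so the snippet below is only a generic sketch of turning it off:

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Single-process "gloo" group only so this snippet runs standalone;
# a real run would be launched with torchrun and NCCL.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

model = torch.nn.Linear(10, 1)  # stand-in for the actual UNIVERSE++ model
ddp_model = DDP(model, find_unused_parameters=False)  # avoids the extra autograd graph traversal

dist.destroy_process_group()
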
/home/usr23/wenyichen/miniconda3/envs/open-universe/lib/python3.10/site-packages/torch/autograd/__init__.py:200: UserWarning: reflection_pad1d_backward_out_cuda does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True, warn_only=True)'. You can file an issue at https://github.com/pytorch/pytorch/issues to help us prioritize adding deterministic support for this operation. (Triggered internally at /opt/conda/conda-bld/pytorch_1682343967769/work/aten/src/ATen/Context.cpp:71.)
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
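
For the third warning, my guess is that it comes from a call like the one below, where warn_only=True makes ops without a deterministic CUDA implementation (here reflection_pad1d_backward) emit this warning and fall back to the nondeterministic kernel instead of raising an error (again just my reading, not necessarily the repo's actual code):

import torch

# Request deterministic algorithms, but only warn when an op
# (such as reflection_pad1d_backward on CUDA) has no deterministic version.
torch.use_deterministic_algorithms(True, warn_only=True)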

Does anyone know how to resolve these warnings?
Also, when I launch training with the command "nohup python ./train.py experiment=universepp_vb_16k > VBD.log 2>&1 &", no training output is shown. How can I display it?
