use pytorch built-in SiLU function to save GPU memory usage #331

Open · pppoe wants to merge 1 commit into main
Conversation

@pppoe commented Oct 5, 2023

  • Replaced the custom-implemented nonlinearity (Swish) function with the PyTorch built-in function SiLU (see the sketch after this list).
  • The built-in function saves significant GPU memory when a large output size is specified.
  • According to the PyTorch documentation, the formulation and the result are identical: SiLU(x) = x * sigmoid(x).
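
For illustration, here is a minimal sketch of the change, assuming the repo's activation helper looks like the nonlinearity function in ldm/modules/diffusionmodules/model.py (the exact diff is in the commit); the names nonlinearity_custom and the shapes below are only for demonstration:

import torch
import torch.nn.functional as F

# Before: hand-rolled Swish/SiLU. This materializes sigmoid(x) as a
# separate intermediate tensor, which grows with the activation size.
def nonlinearity_custom(x):
    return x * torch.sigmoid(x)

# After (this PR): PyTorch's built-in computes the same x * sigmoid(x)
# without allocating that extra intermediate.
def nonlinearity(x):
    return F.silu(x)

# Quick equivalence check on a small tensor:
x = torch.randn(2, 4)
assert torch.allclose(nonlinearity_custom(x), nonlinearity(x))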

On my 12 GB GPU, the following command raises a GPU OOM error without this PR but runs to completion with it:

python3 scripts/txt2img.py --H 1024 --W 1024 --plms --ckpt /data/models/SD/v2-1_512-ema-pruned.ckpt --config configs/stable-diffusion/v2-inference.yaml  --device cuda --prompt "pytorch logo" --n_sample 1  --n_iter 1

Tested under the default environment requirements. The SiLU function has been available in PyTorch since version 1.7.0.
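
As a usage note, both the functional and module forms of SiLU have shipped since PyTorch 1.7.0, so either spelling should work in this codebase; the tensor below is just a toy check:

import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(2, 3)
silu = nn.SiLU()  # module form, usable inside nn.Sequential
assert torch.equal(silu(x), F.silu(x))  # same underlying op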
