Is it possible to use opt-sdp-attention instead of xformers? #293
-
Hi, would it be possible to use the opt-sdp-attention optimisation instead of xformers? I've switched to torch 2 and everything is running fine, but I get better performance using opt-sdp-attention. Would it be relatively easy to change ComfyUI to use it as well?
-
Yes, the command line option to use it is: --use-pytorch-cross-attention. The reason I don't really advertise it is that, according to tests, performance is worse than xformers for most people at higher resolutions, where speed usually matters more than at lower resolutions.
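For anyone curious what that flag actually switches to: as I understand it, it routes attention through PyTorch 2's built-in scaled dot-product attention instead of xformers. A minimal sketch of that call is below; the tensor shapes and variable names are illustrative only, not ComfyUI's actual code.

```python
import torch
import torch.nn.functional as F

# Illustrative shapes: (batch, heads, sequence length, head dim)
q = torch.randn(1, 8, 4096, 64, device="cuda", dtype=torch.float16)
k = torch.randn(1, 8, 4096, 64, device="cuda", dtype=torch.float16)
v = torch.randn(1, 8, 4096, 64, device="cuda", dtype=torch.float16)

# PyTorch >= 2.0 fused scaled dot-product attention.
# PyTorch picks a backend (flash / memory-efficient / math) automatically.
out = F.scaled_dot_product_attention(q, k, v, dropout_p=0.0, is_causal=False)
print(out.shape)  # torch.Size([1, 8, 4096, 64])
```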
-
Do I add this argument when I run main.py? Like so: py main.py --use-pytorch-cross-attention? If that's how it works, I did this and noticed zero difference in the image, and the iterations/s were identical.
-
I'm also curious if this is how I enable flash_attn, because initially I thought that simply installing flash_attn shaved off more than 60% of the waiting time... but when I tried a fresh new install of ComfyUI, it actually ran many times faster than before, which suggests I did the original installation badly and messed something up. I'm not sure where I went wrong, though.
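Note that the flash_attn pip package is separate from the flash backend built into PyTorch's scaled dot-product attention, which is what --use-pytorch-cross-attention uses as far as I can tell. If you want to check which fused SDP backends your PyTorch build currently has enabled (rather than guessing from install steps), a quick check like the sketch below works on PyTorch 2.x; whether the flash path is actually usable still depends on your GPU, dtype, and build.

```python
import torch

# PyTorch 2.x: report which scaled-dot-product-attention backends are enabled.
print("flash SDP enabled:       ", torch.backends.cuda.flash_sdp_enabled())
print("memory-efficient SDP:    ", torch.backends.cuda.mem_efficient_sdp_enabled())
print("math (fallback) SDP:     ", torch.backends.cuda.math_sdp_enabled())
```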