This adds a speed overhead of about 15% over the naive implementation at 512x512, and the overhead grows as array sizes increase. The reduction in memory overhead is significant, though, and can allow for much larger image generation.
I needed `r` to accept either a float or an int, representing either a fraction of the tokens or an absolute number of merged tokens. This makes `r` easier to set.
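A small helper along those lines (hypothetical name and signature, not taken from the repo) might look like:

```python
import math

def resolve_r(r, num_tokens):
    """Interpret r as either a fraction of tokens to merge (float in (0, 1))
    or an absolute number of merged tokens (non-negative int).
    Hypothetical helper, for illustration only."""
    if isinstance(r, float) and 0.0 < r < 1.0:
        # Fractional r: merge that fraction of the current token count.
        return int(math.floor(r * num_tokens))
    if isinstance(r, int) and r >= 0:
        # Absolute r: clamp, since bipartite matching can merge at most half.
        return min(r, num_tokens // 2)
    raise ValueError(f"r must be a float in (0, 1) or a non-negative int, got {r!r}")
```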
Inserting `del` for some tensors, such as `a`, `b`, and `score`, a trick I've seen floating around in some SD implementations. I haven't fully debugged it, but it seems to reduce memory usage.
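For illustration, the pattern looks roughly like this (a NumPy stand-in with hypothetical names; in PyTorch, the same `del` calls drop the last references so the CUDA caching allocator can reclaim those tensors before the function returns):

```python
import numpy as np

def merge_scores(metric):
    """Compute pairwise similarity between alternating token sets, freeing
    large intermediates with `del` as soon as they are consumed.
    Illustrative sketch, not the exact code from the repo."""
    metric = metric / np.linalg.norm(metric, axis=-1, keepdims=True)
    a, b = metric[..., ::2, :], metric[..., 1::2, :]
    scores = a @ np.swapaxes(b, -1, -2)        # (batch, n/2, n/2) similarity
    node_max = scores.max(axis=-1)             # best match strength per token
    node_idx = scores.argmax(axis=-1)          # index of that best match
    del a, b, scores  # drop references so the allocator can reclaim them now
    edge_idx = np.argsort(-node_max, axis=-1)  # strongest edges first
    return node_max, node_idx, edge_idx
```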
Let me know if you want me to do a PR with these changes so I can use your repo instead.
Very interested in the work you're doing. Speed and memory efficiency are crucial for anyone trying to generate at scale.
We've implemented Token Merging: facebookresearch/ToMe#7
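Conceptually, the savings come from merging tokens before attention and unmerging afterwards, so attention runs on fewer tokens. A minimal sketch using a naive pair-averaging merge (the real ToMe uses bipartite soft matching; all names here are illustrative):

```python
import numpy as np

def merge_pairs(x):
    """Naive merge: average adjacent token pairs, (B, N, C) -> (B, N//2, C)."""
    return 0.5 * (x[:, ::2, :] + x[:, 1::2, :])

def unmerge_pairs(x, n):
    """Naive unmerge: copy each merged token back to both positions in its pair."""
    return np.repeat(x, 2, axis=1)[:, :n, :]

def tome_block(x, attn):
    """Run attention on the reduced token set; cost scales with (N/2)^2, not N^2."""
    n = x.shape[1]
    return unmerge_pairs(attn(merge_pairs(x)), n)
```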