Great work! However, I have some questions regarding the image generation part.
Specifically, how is the pixel decoder implemented? Could you provide more details? My current understanding is that a decoder needs to be retrained to map the discriminative tokens back to pixel space, but I am puzzled that this alone achieves such high-quality generation results. Or is the VQGAN part entirely fixed and not trained, with the discriminative tokens serving only as conditional inputs to generate the corresponding VQ tokens, which the VQ decoder then maps back to pixel space?
Looking forward to your response.
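To make the second hypothesis concrete, here is a minimal sketch of that pipeline: the VQGAN codebook and decoder are entirely frozen, and the discriminative tokens act only as conditioning for a generator that predicts VQ token indices, which the frozen VQ decoder then maps back to pixels. All names, shapes, and the toy linear decoder below are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of the hypothesized pipeline: frozen VQGAN, discriminative tokens
# used only as conditioning. Everything here is a toy stand-in (assumption).
import numpy as np

rng = np.random.default_rng(0)

CODEBOOK_SIZE = 16   # number of VQ codebook entries (assumed)
CODE_DIM = 8         # dimensionality of each codebook vector (assumed)
GRID = 4             # VQ latent grid is GRID x GRID tokens (assumed)
COND_DIM = 8         # dimensionality of a discriminative token (assumed)

# Frozen VQGAN pieces: codebook and a toy linear "pixel decoder".
codebook = rng.normal(size=(CODEBOOK_SIZE, CODE_DIM))               # fixed
decoder_w = rng.normal(size=(GRID * GRID * CODE_DIM, 3 * 32 * 32))  # fixed

def vq_decode(indices):
    """Map a grid of VQ token indices back to pixel space (frozen)."""
    codes = codebook[indices].reshape(-1)  # look up codebook vectors
    pixels = codes @ decoder_w             # toy frozen decoder
    return pixels.reshape(3, 32, 32)

# Toy conditional generator: the only trainable part under this hypothesis.
gen_w = rng.normal(size=(COND_DIM, GRID * GRID * CODEBOOK_SIZE))

def generate_vq_tokens(disc_token):
    """Predict one VQ index per grid position, conditioned on a
    discriminative token (greedy argmax for simplicity)."""
    logits = (disc_token @ gen_w).reshape(GRID * GRID, CODEBOOK_SIZE)
    return logits.argmax(axis=1)

disc_token = rng.normal(size=COND_DIM)  # stand-in discriminative token
indices = generate_vq_tokens(disc_token)
image = vq_decode(indices)
print(indices.shape, image.shape)       # (16,) (3, 32, 32)
```

Under this reading, no pixel-space decoder is retrained at all; only the token generator learns, which would explain high-quality outputs since image fidelity comes entirely from the pretrained VQ decoder.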