About Cond P-Diff #24
Very good job! I also look forward to Cond P-Diff!
Hi, thanks for your attention to our work. We will open-source it by the end of September.
Thank you for your confirmation! That's great.
Hi, do we have any updates regarding the new code?
Any updates on the timeline for conditional P-Diff?
Hi, due to GPU issues, we will update by next week.
Hi, can we get access to the parameter autoencoder and the UNet architecture used in conditional P-Diff first?
Hi, I have sent all the code and the dataset through email. Please check. We will reformat our code soon.
I received it. Thank you so much!
Hi, any updates on the timeline for conditional P-Diff?
Hi, please email [email protected] and I will share all the details.
@Jinxiaolong1129 @zhanglijun95 Hi, I hope I'm not interrupting. I am reading and reproducing "Conditional LoRA Parameter Generation"; could you please send me a copy of the code? If it is convenient, please email [email protected]. Thank you very much!
Hi Authors,
Thank you for your great work! It has inspired me a lot, and I'm really looking forward to your code for Cond P-Diff. May I know the estimated time for getting access to it?
I also have a question about Cond P-Diff. I saw that the CV task in the paper is style image generation, and Cond P-Diff generates parameters according to the condition, namely the style image. When you test Cond P-Diff, do you give it a style image it was trained with, or a totally new/unseen style? For example, train Cond P-Diff with 10 style-parameter pairs and test with another 5 styles.
I noticed that in the Appendix you mention the style-continuous dataset and the generalizability of Cond P-Diff to generating parameters for styles in a range not covered by the training set. But here I want to discuss with you whether you think it can generate parameters for a totally unseen style. Or do you have any insight about this?
Really appreciate your response and great work. Thank you!
Best,
Lijun