First, I want to express my sincere gratitude for this great work! While trying to train my own stage-1 model (learning face priors), I used 1024×1024 FFHQ images to create the DECA lmdb. During training, however, the MSE loss plateaued at about 0.002, whereas the pre-trained model provided by this project reaches about 0.001, and my trained model consequently performs poorly at inference.
I wonder whether this gap comes from the dataset I used or from my DECA environment. I printed the surface normal, albedo, and Lambertian rendering images before storing them in the lmdb, and found abnormal green spots in the Lambertian renderings. Is this expected, or could it explain the poor training results? I'd appreciate any help in fixing it.
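For what it's worth, one common source of isolated color speckles like this is unclipped or non-finite values in the float rendering buffer, which wrap around when the buffer is cast to uint8 for lmdb storage. A minimal sanity check along those lines (just a sketch, not DECA's actual code; `check_render` is a hypothetical helper):

```python
import numpy as np

def check_render(img):
    """Flag pixels that may show up as color artifacts.

    img: H x W x 3 float array (e.g. a Lambertian rendering), expected in [0, 1].
    Out-of-range or NaN values wrap around when the buffer is cast to
    uint8 without clipping, producing speckles in a single channel.
    """
    bad = ~np.isfinite(img) | (img < 0.0) | (img > 1.0)
    n_bad = int(bad.any(axis=-1).sum())          # pixels with any bad channel
    # Clip and sanitize before converting to uint8 for lmdb storage.
    safe = (np.clip(np.nan_to_num(img), 0.0, 1.0) * 255).astype(np.uint8)
    return n_bad, safe

# Example: a rendering with one overflowing green channel.
img = np.zeros((4, 4, 3), dtype=np.float32)
img[0, 0, 1] = 1.7   # green overflow -> would wrap if cast naively
n_bad, safe = check_render(img)
print(n_bad)         # 1
```

If `n_bad` is nonzero for your renderings, the spots are likely a value-range issue in the rendering/storage step rather than a dataset problem.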
Here are example images of the abnormal green spots in the Lambertian rendering:
And here is an example of the poor performance when running inference with the trained model mentioned above:
Could you provide more details on how you obtain the Lambertian rendering, and which exact image shows the problem? It seems fine on my end.