Hi @jackchinor, you can use any depth dataset and train a backbone model for depth in/outpainting in a self-supervised way using any unpaired masks (randomly generated, segmentation masks, etc.). Then, during inference, the model is applied twice by inverting the input mask, where the input mask is the quasi-accurate mask you'd like to refine. Our approach is very general and should work with any backbone architecture and dataset. All necessary details are in the paper. Feel free to let me know if you have any other questions!
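To illustrate the data flow described above, here is a minimal NumPy sketch of the two-pass inference with an inverted mask. Note that `inpaint` below is a trivial placeholder (it fills holes with the mean visible depth), not the trained backbone from the paper, and `random_mask`, `layered_refine`, and the merge rule are hypothetical names and choices for illustration only:

```python
import numpy as np

def random_mask(shape, rng, hole_frac=0.3):
    """One example of an unpaired mask for self-supervised training:
    a random binary mask (1 = visible, 0 = hole)."""
    return (rng.random(shape) > hole_frac).astype(np.float32)

def inpaint(depth, mask):
    """Placeholder for a trained depth in/outpainting backbone:
    fills pixels where mask == 0 with the mean of the visible depth."""
    filled = depth.copy()
    filled[mask == 0] = depth[mask > 0].mean()
    return filled

def layered_refine(depth, mask):
    """Apply the backbone twice, the second time with the inverted mask,
    then merge: each pass contributes the region it inpainted."""
    fg = inpaint(depth, mask)        # fills the holes (mask == 0)
    bg = inpaint(depth, 1.0 - mask)  # fills the complement (mask == 1)
    return (1.0 - mask) * fg + mask * bg

rng = np.random.default_rng(0)
depth = rng.random((8, 8)).astype(np.float32)
mask = random_mask(depth.shape, rng)
refined = layered_refine(depth, mask)
```

In training, the backbone would instead be optimized to reconstruct the ground-truth depth under such random masks; the two-pass trick is only needed at inference time.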
Hi, could you share the training code or some tips? I'd like to add layered depth refinement to a depth estimation task, but I don't know where to start. Thanks!