This repository has been archived by the owner on Oct 31, 2023. It is now read-only.

Confusion on the inference pipeline #105

Open
AmmonZ opened this issue Jun 15, 2023 · 0 comments

Comments


AmmonZ commented Jun 15, 2023

Can anyone help clear up my confusion? My understanding is that the main idea behind DeepSDF is to learn latent codes that "embed" the underlying geometry of a mesh, supervised by its SDF, and that these codes can be optimized at test time. However, in reconstruct.py I noticed that DeepSDF directly uses the preprocessed ShapeNet ground-truth SDF samples of the validation set as gt_sdf to supervise the latent-code optimization during inference.

My question is whether it is accurate to say that DeepSDF uses the ground-truth test meshes from ShapeNet when evaluating its own performance. The procedure appears to be: ground-truth test meshes -> ground-truth test SDF samples -> supervision for latent-code optimization at inference. If that is the case, shouldn't any compared method also be given the ground-truth test meshes for a fair comparison?
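For reference, the test-time procedure being described can be sketched roughly as below. This is a minimal illustrative stand-in, not the actual reconstruct.py code: the real decoder is an 8-layer MLP trained with a clamped-L1 loss, whereas here a frozen linear map and plain gradient descent are assumed just to show the structure (decoder fixed, only the latent code z updated against ground-truth SDF samples).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen "decoder": a fixed linear map (latent, xyz) -> sdf.
# In DeepSDF this is a trained MLP whose weights stay frozen at test time.
latent_dim, n_points = 4, 256
W_z = rng.normal(size=latent_dim)   # frozen weights acting on the latent code
W_x = rng.normal(size=3)            # frozen weights acting on the xyz query

def decoder(z, xyz):
    # Predicted SDF value at each query point, shape (n_points,)
    return xyz @ W_x + z @ W_z

# Stand-in for the preprocessed ground-truth SDF samples of one test shape
# (in practice these are loaded from the ShapeNet SDF preprocessing output).
xyz = rng.normal(size=(n_points, 3))
z_true = rng.normal(size=latent_dim)
sdf_gt = decoder(z_true, xyz)

# Test-time optimization: the decoder is frozen, only z is updated to fit
# the ground-truth SDF samples -- this is the supervision in question.
z = np.zeros(latent_dim)
lr = 0.05
for _ in range(500):
    residual = decoder(z, xyz) - sdf_gt       # per-point SDF error
    grad_z = residual.mean() * W_z            # gradient of 0.5 * MSE w.r.t. z
    z -= lr * grad_z

print(np.abs(decoder(z, xyz) - sdf_gt).mean())  # mean SDF error after fitting
```

The key point the sketch makes concrete: the loss driving z is computed directly against sdf_gt, which is derived from the ground-truth test mesh, so the test-time inputs go beyond a raw observation of the shape.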
