I'd like to join the question about the fusion part. As I understand it, the main correspondence happens in the `grid_sample` call, where the point cloud coordinates are represented in normalized form [-1, 1]. But I have doubts about whether these coordinates can be directly mapped to any image feature map other than the original image associated with the projection matrix. How does this mapping work?
How can we be sure that the xy coordinates are mapped to the feature map correctly? I ask because the CNN layers produce feature maps of different sizes.
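For reference, here is a minimal sketch (not the repository's exact code) of how projected LiDAR points can sample a CNN feature map with `F.grid_sample`. The key point is that the grid coordinates are normalized to [-1, 1] relative to the full image, so the same grid works for feature maps of any resolution, as long as each map still spans the whole image; bilinear interpolation handles the sub-pixel offsets. Names such as `sample_img_features`, `xy_pixel`, `img_h`, and `img_w` are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def sample_img_features(img_feats, xy_pixel, img_h, img_w):
    """
    img_feats: (B, C, H_f, W_f) feature map from any CNN stage.
    xy_pixel:  (B, N, 2) point projections in ORIGINAL image pixel coordinates.
    img_h, img_w: size of the original image used by the projection matrix.
    """
    # Normalize pixel coordinates to [-1, 1] w.r.t. the original image size.
    # Because the normalization is relative, a coarser feature map (e.g. after
    # stride-2 convolutions) is simply sampled at the corresponding relative
    # location.
    x = xy_pixel[..., 0] / (img_w - 1) * 2.0 - 1.0
    y = xy_pixel[..., 1] / (img_h - 1) * 2.0 - 1.0
    grid = torch.stack([x, y], dim=-1).unsqueeze(1)   # (B, 1, N, 2)

    # Bilinear sampling; points projecting outside the image receive zeros.
    sampled = F.grid_sample(img_feats, grid, mode='bilinear',
                            padding_mode='zeros', align_corners=True)
    return sampled.squeeze(2).transpose(1, 2)          # (B, N, C)
```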
Hello, I see that your `Fusion_Conv` class concatenates `point_features` and `img_features` directly. Where is the "w" part in your code?
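To make the question concrete, below is a hypothetical sketch contrasting the two variants being compared: plain concatenation of the two feature streams versus image features gated by a learned weight "w" before concatenation. This is not the repository's actual code; the module name `FusionConvSketch` and the tensor shapes are assumptions.

```python
import torch
import torch.nn as nn

class FusionConvSketch(nn.Module):
    def __init__(self, point_channels, img_channels, out_channels, use_gate=True):
        super().__init__()
        self.use_gate = use_gate
        if use_gate:
            # "w" part: a per-point scalar in (0, 1) predicted from both
            # streams, used to re-weight the image features before fusion.
            self.gate = nn.Sequential(
                nn.Conv1d(point_channels + img_channels, 1, kernel_size=1),
                nn.Sigmoid())
        self.conv = nn.Sequential(
            nn.Conv1d(point_channels + img_channels, out_channels, kernel_size=1),
            nn.BatchNorm1d(out_channels),
            nn.ReLU(inplace=True))

    def forward(self, point_features, img_features):
        # point_features: (B, C_p, N), img_features: (B, C_i, N)
        if self.use_gate:
            w = self.gate(torch.cat([point_features, img_features], dim=1))
            img_features = img_features * w   # weighted image features
        fused = torch.cat([point_features, img_features], dim=1)
        return self.conv(fused)
```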