Hi! Congratulations!
I have a question: why do you share parameters between those sub-networks? Is there any motivation other than reducing the number of parameters, perhaps some theory, explanation, or experiments on it?
I would appreciate it if you could reply as soon as possible.
Thank you for your reply! @JihyongOh
Actually, I was wondering whether parameters can be shared among those sub-networks. The results show that this method works, so I think there should be a reason why those parameters can be shared. Perhaps the method could also be applied to other similar tasks.
@nemoHy
To clarify, the three sub-networks (BiFlownet, TFlownet, Refinement Block) in Fig. 4 are not shared with each other, but they can be shared across scale levels as in Fig. 3. You can also check in the provided PyTorch code that those three sub-networks are independent (not shared).
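For anyone else reading this thread, here is a minimal PyTorch sketch of that sharing pattern. It is not the actual repository code; the module name `SubNet`, the layer choices, and the channel counts are placeholders. The point is that the three sub-networks are separate `nn.Module` instances (not shared with each other), while the same instances are invoked at every scale level, which is what "shared across scales" means here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SubNet(nn.Module):
    """Placeholder for one sub-network (e.g. BiFlownet / TFlownet / Refinement Block)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)


class MultiScaleNet(nn.Module):
    """Three independent sub-networks; each one's weights are reused at every scale level."""
    def __init__(self, num_scales=3):
        super().__init__()
        self.num_scales = num_scales
        # Not shared with each other: three distinct modules, each with its own parameters.
        self.biflownet = SubNet(in_ch=6, out_ch=4)   # hypothetical: bidirectional flow from the two inputs
        self.tflownet = SubNet(in_ch=4, out_ch=4)    # hypothetical: flow to the target time t
        self.refine = SubNet(in_ch=10, out_ch=3)     # hypothetical: refinement of the result

    def forward(self, x0, x1):
        out = None
        # Shared across scales: the SAME module instances are called at every level,
        # so coarse and fine levels use identical weights.
        for s in reversed(range(self.num_scales)):   # coarse-to-fine
            factor = 1.0 / (2 ** s)
            x0_s = F.interpolate(x0, scale_factor=factor, mode='bilinear', align_corners=False)
            x1_s = F.interpolate(x1, scale_factor=factor, mode='bilinear', align_corners=False)
            bi_flow = self.biflownet(torch.cat([x0_s, x1_s], dim=1))
            t_flow = self.tflownet(bi_flow)
            out = self.refine(torch.cat([x0_s, x1_s, t_flow], dim=1))
        return out


if __name__ == "__main__":
    net = MultiScaleNet(num_scales=3)
    x0 = torch.randn(1, 3, 64, 64)
    x1 = torch.randn(1, 3, 64, 64)
    print(net(x0, x1).shape)  # torch.Size([1, 3, 64, 64])
```

This is only meant to illustrate the parameter-sharing structure; the actual flow estimation, warping, and recursion details are in the authors' released code.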
Thank you for your reply again! Sorry for my imprecise wording. @hjSim @JihyongOh
I understand that the three sub-networks (BiFlownet, TFlownet, Refinement Block) in Fig. 4 are not shared with each other; my question is about the sharing across scale levels. I know it saves parameters and works well in practice, but is there any theoretical explanation for why the parameters can be shared across scales and still work so well?