Hello! Your code seems to differ somewhat from the formula in the paper #29
Comments
Hi, Eq. (4) says that a patch of z is a weighted sum over all patches, where the weights are determined by the similarity between the patches of x and y. The similarity is measured by a dot product, which is equivalent to cutting y into p×p patches and using them as conv filters to convolve x. Once you have, for each patch of x, its weights over all patches of y, conv_transpose2d performs the weighted sum over all patches; at that point each patch has size sp×sp. You can also refer to the implementation of contextual inpainting (https://github.com/daa233/generative-inpainting-pytorch), which uses a similar idea to this paper, as well as (https://github.com/SHI-Labs/Cross-Scale-Non-Local-Attention/issues/3).
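The steps above can be sketched in PyTorch. This is a minimal single-image illustration, not the repo's exact code: the function name, the bilinear downsampling used to obtain y, the cosine normalization of the filters, and the softmax temperature are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def cross_scale_attention(x, scale=2, p=3, softmax_scale=10.0):
    """Sketch of Eq. (4): match p x p patches of a downsampled copy y
    against x, then aggregate the corresponding sp x sp patches of x.
    Illustrative only; assumes batch size 1."""
    B, C, H, W = x.shape
    assert B == 1  # per-image filters, so keep the sketch single-image
    sp = scale * p
    # y: s-times downsampled copy of x; its p x p patches line up with
    # sp x sp patches of x
    y = F.interpolate(x, scale_factor=1.0 / scale, mode='bilinear',
                      align_corners=False)
    # 1) cut y into p x p patches and use them as correlation filters
    w = F.unfold(y, kernel_size=p, padding=p // 2)           # [1, C*p*p, N]
    N = w.shape[-1]
    w = w.transpose(1, 2).reshape(N, C, p, p)
    w = w / w.reshape(N, -1).norm(dim=1).clamp_min(1e-4).reshape(N, 1, 1, 1)
    # 2) convolving x with those filters = dot-product similarity of every
    #    x patch with every y patch; softmax over the N patch channels
    score = F.softmax(F.conv2d(x, w, padding=p // 2) * softmax_scale, dim=1)
    # 3) the values are the sp x sp patches of x corresponding to each
    #    y patch (stride s keeps the patch count at N)
    raw = F.unfold(x, kernel_size=sp, stride=scale, padding=scale * (p // 2))
    raw = raw.transpose(1, 2).reshape(N, C, sp, sp)
    # 4) conv_transpose2d places a weighted sum of all N patches at every
    #    spatial position; overlapping placements accumulate, so divide by
    #    the per-pixel overlap count to average them
    z = F.conv_transpose2d(score, raw, stride=scale, padding=scale * (p // 2))
    ones = torch.ones(1, C * sp * sp, H * W)
    overlap = F.fold(ones, output_size=(scale * H, scale * W),
                     kernel_size=sp, stride=scale, padding=scale * (p // 2))
    return z / overlap
```

With `scale=2` and `p=3`, a `[1, C, H, W]` input yields a `[1, C, 2H, 2W]` output, since each attention position deposits a 6×6 patch at stride 2.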
Thank you very much! I'd also like to ask some questions about the training details:
About Patch-Based Cross-Scale Non-Local Attention
As shown in Eq. (4) of the paper, each patch (si, sj) of Z is computed from a single corresponding patch (i, j) of X; but in the F.conv_transpose2d line of the corresponding code, each pixel of Z seems to be computed from the 9 patches of X around the corresponding position.
This is a bit hard to put into words; I'm not sure I've explained it clearly. Any guidance would be appreciated.
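The "9 patches" observation can be checked directly by counting overlaps: each spatial position of the attention map deposits one sp×sp patch, and patches placed stride-s apart overlap. The numbers below (s=2, p=3, an 8×8 map) are illustrative, not taken from the repo.

```python
import torch
import torch.nn.functional as F

# Count how many deposited patches cover each output pixel of Z by running
# conv_transpose2d with an all-ones "attention map" and an all-ones patch.
s, p = 2, 3
score = torch.ones(1, 1, 8, 8)           # one dummy patch index, 8x8 positions
patch = torch.ones(1, 1, s * p, s * p)   # a single sp x sp all-ones patch
cover = F.conv_transpose2d(score, patch, stride=s, padding=s * (p // 2))
print(cover[0, 0, 8, 8].item())          # interior pixels: p*p = 9 overlaps
```

So each interior pixel of Z is indeed the accumulation of p×p = 9 overlapping patches, which is consistent with Eq. (4) once the overlaps are averaged out.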