
Hello! Your code seems to differ slightly from the formulas in the paper #29

Open
qizhou000 opened this issue Apr 24, 2021 · 3 comments

Comments

@qizhou000

Regarding Patch-Based Cross-Scale Non-Local Attention

As Equation (4) in the paper shows, each patch (si, sj) of Z is computed from the single corresponding patch (i, j) of X; but in the F.conv_transpose2d line of the corresponding code, each pixel of Z seems to be computed from the 9 patches of X at the corresponding location.
This is a bit hard to put into words, so I'm not sure I've made myself clear. I'd appreciate your guidance.

@HarukiYqM
Collaborator

HarukiYqM commented Apr 24, 2021

Hi, Equation (4) means that each patch of z is a weighted sum over all patches. The weights are determined by the similarity between patches of x and y, and similarity is measured by a dot product, which is equivalent to cutting y into p×p patches and using them as conv filters to convolve over x. Once you have, for each patch of x, its weights over all patches of y, the deconvolution conv_transpose2d performs the weighted sum of all the patches; at that stage each patch has size sp×sp.
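To make the conv / conv_transpose2d equivalence concrete, here is a minimal self-contained sketch. It is a simplification, not the repository's actual CrossScaleAttention module: batch size 1, a single feature map, and the names x, y, y_hr, softmax_scale are illustrative only.

```python
import torch
import torch.nn.functional as F

def cross_scale_attention(x, y, y_hr, p=3, s=2, softmax_scale=10.0):
    # x    : (1, C, H, W)   embedding of the input feature map
    # y    : (1, C, H, W)   embedding of the s-times downscaled input
    # y_hr : (1, C, sH, sW) embedding at the target (s-times) resolution
    _, C, H, W = x.shape
    N = H * W  # number of candidate patches

    # 1. Cut y into p x p patches and use them as conv filters: the conv
    #    response at (i, j) is the dot product between the patch of x
    #    centred at (i, j) and every patch of y.
    w = F.unfold(y, kernel_size=p, padding=p // 2)        # (1, C*p*p, N)
    w = w.transpose(1, 2).reshape(N, C, p, p)
    # Normalize each filter so the score behaves like a cosine similarity
    # (the released code applies a similar L2 normalization).
    w = w / w.flatten(1).norm(dim=1).clamp_min(1e-6).view(N, 1, 1, 1)
    score = F.conv2d(x, w, padding=p // 2)                # (1, N, H, W)

    # 2. Softmax over the N candidates at every position of x, so the
    #    weights per position sum to 1 (the weighted sum in Eq. 4).
    attn = F.softmax(score * softmax_scale, dim=1)

    # 3. Cut y_hr into sp x sp patches (stride s) and paste them back with
    #    conv_transpose2d, weighted by attn. Because sp x sp footprints at
    #    stride s overlap, every interior output pixel receives
    #    contributions from p*p = 9 overlapping patches when p = 3 --
    #    exactly the "9 patches" effect asked about above.
    w_hr = F.unfold(y_hr, kernel_size=s * p, stride=s, padding=s * (p // 2))
    w_hr = w_hr.transpose(1, 2).reshape(N, C, s * p, s * p)
    z = F.conv_transpose2d(attn, w_hr, stride=s, padding=s * (p // 2))

    # 4. Average the overlaps: divide by the per-pixel cover count.
    cnt = F.conv_transpose2d(torch.ones(1, 1, H, W),
                             torch.ones(1, 1, s * p, s * p),
                             stride=s, padding=s * (p // 2))
    return z / cnt  # (1, C, sH, sW)
```

For example, cross_scale_attention(torch.randn(1, 64, 24, 24), torch.randn(1, 64, 24, 24), torch.randn(1, 64, 48, 48)) returns a (1, 64, 48, 48) map.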

You can also refer to the contextual inpainting implementation (https://github.com/daa233/generative-inpainting-pytorch), which uses a similar idea to this paper, as well as (https://github.com/SHI-Labs/Cross-Scale-Non-Local-Attention/issues/3).

@qizhou000
Author

Thank you very much! I'd also like to ask about some training details:
1. How many iterations does one epoch contain?
2. Roughly how long did each scale take to train, and on how many GPUs?
I'd be very grateful if you could let me know.

@HarukiYqM
Collaborator

HarukiYqM commented Apr 25, 2021


  1. One epoch is 1000 iterations with batch size 16; you can run the default settings in the demo. For training details, also see (https://github.com/sanghyun-son/EDSR-PyTorch); the training part of this repo is fully based on EDSR-PyTorch. See the sketch after this list for the epoch arithmetic.
  2. Roughly 5 days on 4×V100. Cluster resources were limited during training, so the number of GPUs (best available) was not fixed, and the timing may be inaccurate.
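As a back-of-the-envelope check of "one epoch = 1000 iterations", here is a hedged sketch of the arithmetic as the EDSR-PyTorch data loader roughly does it. The variable names and repeat formula are illustrative, not the actual srdata.py code, and the DIV2K training-set size of 800 images is assumed:

```python
batch_size = 16    # EDSR-PyTorch --batch_size default
test_every = 1000  # EDSR-PyTorch --test_every default (batches per epoch)
n_train = 800      # DIV2K training images (assumed)

# The loader repeats the training list so that one "epoch" yields
# exactly test_every batches:
repeat = (batch_size * test_every) // n_train  # 20
samples_per_epoch = n_train * repeat           # 16000
print(samples_per_epoch // batch_size)         # 1000 iterations per epoch
```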
