Hello, authors!

I'm very interested in the comparison experiment on computation and memory consumption and would like to reproduce it.

After compiling TVM, I ran the following command on an RTX 3080 (10 GB) with Ubuntu 20.04, Python 3.7, PyTorch 1.8.0, CUDA 11.1, and TVM 0.8.0:

python single_step_main.py -data_path data/flow/ -dataset flow -use_tvm

During training, GPU memory usage was 7.77 GB and the speed was 4.27 it/s, which seems quite far from the results shown in Figure 4 of the paper, so I became curious about this experiment.

Does the x-axis "sequence length" in Figure 4 of the paper correspond to input_size in the code? Could you provide more detailed experimental conditions?
Hello, thank you for your interest in our work. Figure 4 in the paper shows the single-layer memory usage and time consumption of different attention mechanisms as the input sequence length varies. It is not measured from the training script; it is meant to give an intuitive picture of the time and space complexity of the different attention mechanisms. The data can be reproduced with this script: https://github.com/ant-research/Pyraformer/blob/master/pyraformer/graph_attention.py
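As a rough illustration of what Figure 4 measures, the sketch below records peak GPU memory and wall time for a single full self-attention layer as the sequence length grows. This is a hypothetical example using PyTorch's built-in MultiheadAttention, not the authors' graph_attention.py; d_model, n_head, the batch size, and the sequence lengths are assumed values.

```python
# Hypothetical sketch: per-layer memory/time of full self-attention vs. sequence length.
# Not the authors' benchmark script; settings below are illustrative only.
import time
import torch
import torch.nn as nn

device = torch.device("cuda")
d_model, n_head, batch = 512, 8, 1  # assumed settings

for seq_len in [256, 512, 1024, 2048, 4096]:
    attn = nn.MultiheadAttention(d_model, n_head).to(device)
    # PyTorch 1.8 expects (seq_len, batch, d_model)
    x = torch.randn(seq_len, batch, d_model, device=device)

    torch.cuda.reset_peak_memory_stats()
    torch.cuda.synchronize()
    start = time.time()

    out, _ = attn(x, x, x)
    out.sum().backward()  # include the backward pass, as in training
    torch.cuda.synchronize()

    elapsed_ms = (time.time() - start) * 1000
    peak_mb = torch.cuda.max_memory_allocated() / 2**20
    print(f"seq_len={seq_len:5d}  time={elapsed_ms:7.1f} ms  peak_mem={peak_mb:8.1f} MB")
```

For the actual curves in Figure 4, please use the linked graph_attention.py, which benchmarks the specific attention variants compared in the paper.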
Got it, thank you very much for the explanation!