[ITensors] Improve type stability of svdMPO and qn_svdMPO #1183
Conversation
This change leads to significant performance improvements.
Codecov Report
Patch coverage:

|          | main   | #1183  | +/-     |
|----------|--------|--------|---------|
| Coverage | 85.40% | 67.37% | -18.03% |
| Files    | 88     | 87     | -1      |
| Lines    | 8405   | 8389   | -16     |
| Hits     | 7178   | 5652   | -1526   |
| Misses   | 1227   | 2737   | +1510   |

☔ View full report in Codecov by Sentry.
Hi @terasakisatoshi, thanks a lot for the PR. It's nice to see that a relatively small change leads to big improvements in speed and memory allocations. Your pull request inspired me to make my own here: #1184. It essentially implements the same function-barrier design you suggest, but also updates some of the style of the surrounding code, which I noticed was a bit outdated. Would you mind benchmarking that PR to see if it leads to the same performance improvements as this one for your use case?

Also, it's neat to see that you are passing Hamiltonians generated by OpenFermion into ITensor. We have similar functionality in an experimental package called ITensorChemistry.jl, which calls out to PySCF using PythonCall.jl. We found that generating the MPOs for generic quantum chemistry Hamiltonians can scale pretty badly with the number of orbitals. That was one of the inspirations for ITensorParallel.jl, which is based around splitting the Hamiltonian into parts and generating a sum of MPOs instead of a single MPO, which can be more efficient.
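For readers unfamiliar with the sum-of-MPOs idea mentioned above, here is a rough, self-contained sketch. It does not use the actual ITensorParallel.jl API; the even/odd split and the Heisenberg model are just an arbitrary illustration of partitioning the terms.

```julia
using ITensors

# Build a Heisenberg-style Hamiltonian as two OpSums and convert each part to
# its own MPO, so the full Hamiltonian is represented as a sum of MPOs rather
# than a single large one. ITensorParallel.jl provides its own, more general
# machinery for this kind of splitting.
function split_heisenberg_mpos(N)
  sites = siteinds("S=1/2", N)
  os_even, os_odd = OpSum(), OpSum()
  for j in 1:(N - 1)
    if iseven(j)
      os_even += "Sz", j, "Sz", j + 1
      os_even += 0.5, "S+", j, "S-", j + 1
      os_even += 0.5, "S-", j, "S+", j + 1
    else
      os_odd += "Sz", j, "Sz", j + 1
      os_odd += 0.5, "S+", j, "S-", j + 1
      os_odd += 0.5, "S-", j, "S+", j + 1
    end
  end
  # Two MPOs whose sum represents the full Hamiltonian.
  return [MPO(os_even, sites), MPO(os_odd, sites)], sites
end

Hs, sites = split_heisenberg_mpos(10)
```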
This advice is very helpful to me. I would definitely like to try the package you suggested.
Thanks for catching this issue!
Co-authored-by: Matt Fishman <[email protected]>
@terasakisatoshi I made some suggestions for comments to add to remind us why the code is written the way it is. Besides that, it looks good from my end.
add a comment Co-authored-by: Matt Fishman <[email protected]>
Is it time to merge?
Description
This PR improves the performance of `svdMPO` and `qn_svdMPO`, which are called from `MPO(os::OpSum, sites::Vector{<:Index})::MPO`. My research has revealed the following:
Getting `ValType = determineValType(terms(os))` inside the method `svdMPO` and defining `Vs = [Matrix{ValType}(undef, 1, 1) for n in 1:N]` without a type annotation causes type instability. This type instability causes the following code sections to run slowly:
ITensors.jl/src/physics/autompo/opsum_to_mpo.jl, lines 96 to 113 (commit b54e8d7)
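To illustrate the fix, here is a minimal, self-contained sketch of the function-barrier pattern. The helper names `determine_val_type` and `build_temporaries` are illustrative stand-ins, not the actual ITensors internals.

```julia
# The element type is only known at runtime, so the compiler cannot infer it here.
determine_val_type(terms) = any(t -> t isa Complex, terms) ? ComplexF64 : Float64

# Problematic pattern: `ValType` is not inferable inside this function body, so
# the binding `Vs` has an abstract inferred type and downstream code that uses
# it in the same function is type-unstable.
function build_temporaries_unstable(terms, N)
  ValType = determine_val_type(terms)
  return [Matrix{ValType}(undef, 1, 1) for n in 1:N]
end

# Function barrier: passing `ValType` as an argument lets the compiler
# specialize this method on the concrete type, so `Vs` is concretely typed
# everywhere inside the barrier.
function build_temporaries(::Type{ValType}, N) where {ValType<:Number}
  return [Matrix{ValType}(undef, 1, 1) for n in 1:N]
end

terms = Any[1.0, 2.0 + 0.0im]
ValType = determine_val_type(terms)
Vs = build_temporaries(ValType, 4)   # Vector{Matrix{ComplexF64}}
```

Computing the value type once and passing it across a function boundary is the same effect this PR achieves for `svdMPO` and `qn_svdMPO`.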
If practical and applicable, please include a minimal demonstration of the previous behavior and new behavior below.
Minimal demonstration of previous behavior
`python qph.py > paulis.jl`
It will create a file `paulis.jl`. You can also find `paulis.jl` in my gist. Run `mpo_bench.jl` to show that our pull request improves performance for creating an `MPO`. On the `main` branch we have:

Minimal demonstration of new behavior
Our new pull request reduces the number of allocations and improves the performance of getting `MPO(os, sites)`.

How Has This Been Tested?
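The comparison above can be reproduced with a benchmark along the following lines. This is only a sketch: the actual `mpo_bench.jl` presumably includes the OpenFermion-generated terms from `paulis.jl`, while here a small transverse-field Ising `OpSum` on qubit sites stands in for them.

```julia
using ITensors
using BenchmarkTools

# Stand-in Hamiltonian; the real benchmark would use the terms from paulis.jl.
function ising_opsum(N)
  os = OpSum()
  for j in 1:(N - 1)
    os += "Z", j, "Z", j + 1
  end
  for j in 1:N
    os += "X", j
  end
  return os
end

N = 20
sites = siteinds("Qubit", N)
os = ising_opsum(N)

# Time and count allocations of converting the OpSum to an MPO, which is where
# svdMPO / qn_svdMPO are called.
@btime MPO($os, $sites)
```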
Checklist:
using JuliaFormatter; format(".")
in the base directory of the repository (~/.julia/dev/ITensors
) to format your code according to our style guidelines.