
[TensorAlgebra] BlockSparseArray tensor contraction #1301

Merged (9 commits into main, Jan 9, 2024)

Conversation

@mtfishman (Member) commented on Jan 5, 2024

This is work in progress towards getting BlockSparseArray tensor contraction working (for now, independent of symmetries).

The current design implements BlockSparseArray contraction through the generic matricized contract code in TensorAlgebra. That code relies on proper implementations of permutation, generalized reshape (which I call fusedims and splitdims in TensorAlgebra), and matrix multiplication, so block sparse array contraction mostly falls out of the generic TensorAlgebra implementation once those three operations are defined.
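To make the permute/fuse/multiply/split pipeline concrete, here is a NumPy sketch of matricized contraction. This is a conceptual illustration only: the actual TensorAlgebra (Julia) interface differs, and the label-based signature here is a hypothetical stand-in.

```python
import math
import numpy as np

def matricized_contract(a, a_labels, b, b_labels, out_labels):
    """Permute -> fuse dims (reshape to a matrix) -> matmul -> split dims.

    A NumPy sketch of the matricized-contraction strategy; not the
    TensorAlgebra API.
    """
    contracted = [l for l in a_labels if l in b_labels]
    a_free = [l for l in a_labels if l not in contracted]
    b_free = [l for l in b_labels if l not in contracted]

    # Permute so each tensor's contracted dims are adjacent and aligned.
    ap = np.transpose(a, [a_labels.index(l) for l in a_free + contracted])
    bp = np.transpose(b, [b_labels.index(l) for l in contracted + b_free])

    # "fusedims": collapse each dim group so the tensors become matrices.
    a_free_shape = [a.shape[a_labels.index(l)] for l in a_free]
    b_free_shape = [b.shape[b_labels.index(l)] for l in b_free]
    k = math.prod(a.shape[a_labels.index(l)] for l in contracted)
    c = ap.reshape(math.prod(a_free_shape), k) @ bp.reshape(k, math.prod(b_free_shape))

    # "splitdims": restore the free dims and permute to the requested order.
    c = c.reshape(*a_free_shape, *b_free_shape)
    free = a_free + b_free
    return np.transpose(c, [free.index(l) for l in out_labels])
```

For a block sparse array, each of these steps (permutation, the fuse/split reshapes, and the matmul) dispatches to a block-aware implementation, which is why the generic code can be reused.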

The current matricized contract code already works for other array types like DiagonalArray and SparseArrayDOK since they implement the required permutation, reshape, and matrix multiplication functionality.

This will generalize to symmetric tensors with even more specialized implementations of fusedims and splitdims that actually fuse and split the blocks according to symmetry sector fusion, but that is left for future work and will rely on the work in #1296 (though we could start testing it without that by making simple abelian group irrep types and using GradedAxes as the axes of a BlockSparseArray).

@codecov-commenter commented on Jan 5, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison is base (921f362) 84.04% compared to head (1e6535d) 53.93%.
Report is 1 commit behind head on main.

❗ Current head 1e6535d differs from the pull request's most recent head 408b6c6. Consider uploading reports for commit 408b6c6 to get more accurate results.


Additional details and impacted files
@@             Coverage Diff             @@
##             main    #1301       +/-   ##
===========================================
- Coverage   84.04%   53.93%   -30.11%     
===========================================
  Files         100       99        -1     
  Lines        8544     8491       -53     
===========================================
- Hits         7181     4580     -2601     
- Misses       1363     3911     +2548     


@emstoudenmire (Collaborator) commented:
I'll be really interested to see how the performance turns out when using the generic matricized contract. It would be great if it's already quite good in many cases. What do you expect or do you already have some timings?

@mtfishman (Member, Author) commented on Jan 8, 2024

> I'll be really interested to see how the performance turns out when using the generic matricized contract. It would be great if it's already quite good in many cases. What do you expect or do you already have some timings?

Yeah, I'm curious about that too. I'd argue that, if written efficiently, matricized contraction costs about the same as our current approach when no QN merging is done: it requires the same permutation of each block of the input tensors, followed by the same matrix multiplications. When QN merging is done it could be faster, depending on the particulars of the symmetries, the sparsity pattern, the tensor order, etc. Right now I'm focusing on code organization, generic interfaces, and correctness, but once the code structure is in place and working, the next step will be benchmarking/profiling and optimization.

One tricky part of making the version that does fusion fast is the operation of permuting blocks into slices of the matricized tensor with fused spaces, which is handled by complicated QN combiner logic in the current NDTensors library. I think the new code design will make this operation easier to implement, and also more general, since it allows fusing/combining multiple groups of indices at once. But it will require some subtle design to keep the code simple while also making it fast and avoiding unnecessary temporary tensor operations.
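To illustrate why this step is subtle, here is a NumPy sketch (with hypothetical helper names, not the NDTensors combiner API) of fusing two blocked axes. In the blocked layout, each block (i, j) becomes one contiguous slice of the fused axis, whereas a plain row-major reshape scatters that block into strided runs; the two layouts differ by a nontrivial index permutation, which is what the combiner logic has to apply efficiently.

```python
import numpy as np

def fuse_blocked_axes(blocks, sizes1, sizes2):
    """Fuse two blocked axes into one blocked axis with contiguous blocks.

    Hypothetical sketch: each stored block (i, j) of a blocked matrix
    becomes one contiguous slice of the fused vector, with fused blocks
    ordered lexicographically; missing blocks are treated as zero.
    """
    out = []
    for i in range(len(sizes1)):
        for j in range(len(sizes2)):
            blk = blocks.get((i, j), np.zeros((sizes1[i], sizes2[j])))
            out.append(blk.reshape(-1))
    return np.concatenate(out)

def fusion_permutation(sizes1, sizes2):
    """Plain row-major positions listed in fused-block order.

    Demonstrates that the blocked fusion differs from a plain reshape by
    an index permutation that interleaves the blocks.
    """
    off1 = np.concatenate(([0], np.cumsum(sizes1)))
    off2 = np.concatenate(([0], np.cumsum(sizes2)))
    d2 = off2[-1]
    perm = []
    for i in range(len(sizes1)):
        for j in range(len(sizes2)):
            for x1 in range(sizes1[i]):
                for x2 in range(sizes2[j]):
                    perm.append((off1[i] + x1) * d2 + off2[j] + x2)
    return np.array(perm)
```

For symmetric tensors the fused blocks would additionally be grouped and merged by symmetry sector, but the layout question sketched here is the same.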

@mtfishman mtfishman marked this pull request as ready for review January 9, 2024 15:39
@mtfishman mtfishman merged commit 78e8bd4 into main Jan 9, 2024
7 checks passed
@mtfishman mtfishman deleted the TensorAlgebra_blocksparse_contract branch January 9, 2024 20:21