TensorOperations.jl

Fast tensor operations using a convenient Einstein index notation.

TensorOperations.jl v2.0.0 is a significant update and rewrite compared to previous versions.

  • TensorOperations.jl now exports an ncon method, familiar in the quantum tensor network community and mostly compatible with e.g. arXiv:1402.0939. Unlike the @tensor macro, which has been at the heart of TensorOperations.jl, ncon analyzes the network at runtime and consequently has a non-inferrable output. On the other hand, this allows the use of dynamic index specifications which are not known at compile time. There is also an @ncon macro which uses the same format and likewise allows dynamic index specifications, but has the advantage that it adds a hook into the global LRU cache where temporary objects are stored and recycled; a short sketch of both follows after this list.

  • TensorOperations.jl now supports CuArray objects via NVIDIA's CUTENSOR library, which is wrapped in CuArrays.jl. This requires that the latter is also loaded with using CuArrays. CuArray objects can be used directly in the existing calls and macro environments like @tensor and @ncon; however, no operation should try to mix a normal Array and a CuArray. There is also a new @cutensor macro which transforms all array objects to the GPU and performs the contractions and permutations there. Objects are moved to the GPU only when they are first needed, so that the transfer of later objects can overlap with computations on earlier ones; a sketch follows after this list.

  • TensorOperations.jl now has a @notensor macro to indicate that a block within an @tensor environment (or @tensoropt or @cutensor) should be left alone: it contains valid Julia code that should not be transformed; a sketch follows after this list.
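
Here is a minimal sketch of the ncon conventions (the tensor sizes and index labels are illustrative only): positive integers label pairs of contracted indices, while negative integers label the open indices of the result, ordered as -1, -2, ….

using TensorOperations
A = randn(4, 4, 4)
B = randn(4, 4)
# contract the second index of A with the first index of B;
# the result carries the open indices -1, -2, -3 as its dimensions, in that order
C = ncon([A, B], [[-1, 1, -2], [1, -3]])

# the same network with an index specification built at runtime,
# evaluated through @ncon so that temporaries go through the global LRU cache
network = [[-1, 1, -2], [1, -3]]
C′ = @ncon([A, B], network)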
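
The following is a sketch of the GPU functionality, assuming CuArrays.jl is installed and a CUDA-capable GPU with the CUTENSOR library is available (array sizes are illustrative):

using TensorOperations, CuArrays
A = randn(64, 64)
B = randn(64, 64)
# CuArray objects can be used directly inside @tensor (do not mix with Array)
Agpu, Bgpu = CuArray(A), CuArray(B)
@tensor Cgpu[i, k] := Agpu[i, j] * Bgpu[j, k]
# alternatively, @cutensor moves plain arrays to the GPU as they are first needed;
# the result is then also a GPU array
@cutensor C[i, k] := A[i, j] * B[j, k]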
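
And a sketch of @notensor inside a @tensor block (variable names are illustrative; @tensor leaves the annotated statement untouched):

using TensorOperations
A = randn(4, 4, 4, 4)
@tensor begin
    @notensor α = 1 / size(A, 1)  # plain Julia code, not index notation
    B[a, b] := α * A[a, c, c, b]  # partial trace over the repeated index c
end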

Code Example

TensorOperations.jl is mostly used through the @tensor macro, which allows one to express a given operation in index notation, a.k.a. Einstein notation: repeated indices are implicitly summed over, following Einstein's summation convention.

using TensorOperations
α = randn()
A = randn(5, 5, 5, 5, 5, 5)
B = randn(5, 5, 5)
C = randn(5, 5, 5)
D = zeros(5, 5, 5)
@tensor begin
    D[a, b, c] = A[a, e, f, c, f, g] * B[g, b, e] + α * C[c, a, b]
    E[a, b, c] := A[a, e, f, c, f, g] * B[g, b, e] + α * C[c, a, b]
end

The first line inside the @tensor block stores the result of the operation in the preallocated array D, whereas the second line uses a different assignment operator := in order to define and allocate a new array E of the correct size. The contents of D and E will be equal.

For more information, please see the docs.