Here is a roadmap to removing `TensorStorage` types (`EmptyStorage`, `Dense`, `Diag`, `BlockSparse`, `DiagBlockSparse`, `Combiner`) in favor of more traditional `AbstractArray` types (`UnallocatedZeros`, `Array`, `DiagonalArray`, `BlockSparseArray`, `CombinerArray`), as well as removing `Tensor` in favor of `NamedDimsArray`.
NDTensors reorganization

Followup to the `BlockSparseArrays` rewrite in #1272:

- Move some functionality to `SparseArrayInterface`, such as `TensorAlgebra.contract`.
- Clean up the tensor algebra code in `BlockSparseArray`, making use of the broadcasting and mapping functionality defined in `SparseArrayInterface` (see the sketch below).
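As a rough illustration of the mapping functionality mentioned above, here is a minimal sketch of applying a function to only the stored entries of a dictionary-of-keys sparse representation; `map_stored` is a hypothetical name for illustration, not the actual `SparseArrayInterface` API.

```julia
# Minimal sketch, not the actual SparseArrayInterface API: map a function over
# only the stored entries of a dictionary-of-keys sparse representation.
# This shortcut is only valid for functions that preserve zero, i.e. f(0) == 0.
function map_stored(f, stored::Dict{CartesianIndex{N},T}) where {N,T}
    iszero(f(zero(T))) || error("`f` must preserve zero to act on stored entries only")
    return Dict(I => f(v) for (I, v) in stored)
end

stored = Dict(CartesianIndex(1, 1) => 2.0, CartesianIndex(3, 2) => -1.0)
map_stored(x -> 2x, stored)  # scales the two stored entries; implicit zeros stay zero
```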
Followup to `SparseArrayInterface`/`SparseArrayDOKs` defined in #1270:

- `TensorAlgebra` overloads for `SparseArrayInterface`/`SparseArrayDOK`, such as `contract`.
- Use `SparseArrayDOK` as a backend for `BlockSparseArray` (maybe call it `BlockSparseArrayDOK`?); see the sketch after this list.
- Consider making a `BlockSparseArrayInterface` package to define an interface and generic functionality for block sparse arrays, analogous to `SparseArrayInterface`. (EDIT: Currently lives inside the `BlockSparseArray` library.)
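For reference, this is the basic shape of a dictionary-of-keys (DOK) sparse array: a hypothetical, minimal version written against the `AbstractArray` interface, not the actual `SparseArrayDOK` implementation.

```julia
# Hypothetical minimal dictionary-of-keys (DOK) sparse array, sketching the
# idea behind SparseArrayDOK; details differ from the actual implementation.
struct SparseDOK{T,N} <: AbstractArray{T,N}
    storage::Dict{CartesianIndex{N},T}  # only the nonzero entries are stored
    size::NTuple{N,Int}
end
SparseDOK{T}(dims::Int...) where {T} =
    SparseDOK(Dict{CartesianIndex{length(dims)},T}(), dims)

Base.size(a::SparseDOK) = a.size
Base.getindex(a::SparseDOK{T,N}, I::Vararg{Int,N}) where {T,N} =
    get(a.storage, CartesianIndex(I), zero(T))  # implicit zero for unstored entries
Base.setindex!(a::SparseDOK{T,N}, v, I::Vararg{Int,N}) where {T,N} =
    (a.storage[CartesianIndex(I)] = v)

a = SparseDOK{Float64}(4, 4)
a[2, 3] = 1.5  # stores one entry; the other 15 entries remain implicit zeros
```

A block sparse array could presumably reuse the same structure with whole blocks as the stored values, e.g. `Dict{CartesianIndex{N},Array{T,N}}` keyed by block position, which is what using `SparseArrayDOK` as a `BlockSparseArray` backend would amount to.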
Followup to the reorganization started in #1268:

- Move the low rank `qr`, `eigen`, `svd` definitions to the `NDTensors.RankFactorization` module. Currently they are defined in `NamedDimsArrays.NamedDimsArraysTensorAlgebraExt`; those should be wrappers around the ones in `NDTensors.RankFactorization`.
- Split off the `SparseArray` type into an `NDTensors.SparseArrays` module (maybe come up with a different name, like `NDSparseArrays`, `GenericSparseArrays`, `AbstractSparseArrays`, etc.). Currently it is in `NDTensors.BlockSparseArrays`. Also rename it to `SparseArrayDOK` (for dictionary-of-keys) to distinguish it from other formats.
- Clean up `NDTensors/src/TensorAlgebra/src/fusedims.jl`.
- Remove `NDTensors.TensorAlgebra.BipartitionedPermutation` and figure out how to disambiguate between the partitioned permutation and named dimension interfaces. How much dimension name logic should go in `NDTensors.TensorAlgebra` vs. `NDTensors.NamedDimsArrays`?
- Create an `NDTensors.CombinerArrays` module. Move the `Combiner` and `CombinerArray` type definitions there.
- Create an `NDTensors.CombinerArrays.CombinerArraysTensorAlgebraExt` extension. Move the `Combiner` `contract` definition from `ITensorsNamedDimsArraysExt/src/combiner.jl` to `CombinerArraysTensorAlgebraExt` (which is just a simple wrapper around `TensorAlgebra.fusedims` and `TensorAlgebra.splitdims`).
- Dispatch the ITensors.jl definitions of `qr`, `eigen`, `svd`, `factorize`, `nullspace`, etc. on `typeof(tensor(::ITensor))`, so that for an `ITensor` wrapping a `NamedDimsArray` we can fully rewrite those functions using `NamedDimsArrays` and `TensorAlgebra`, where the matricization logic can be handled more elegantly with `fusedims` (see the sketch after this list).
- Get all the same functionality working for an `ITensor` wrapping a `NamedDimsArray` wrapping a `BlockSparseArray`.
- Make sure all `NamedDimsArrays`-based code works on GPU.
- Make `Index` a subtype of `AbstractNamedInt` (or maybe `AbstractNamedUnitRange`?).
- Make `ITensor` a subtype of `AbstractNamedDimsArray`.
- Deprecate from `NDTensors.RankFactorization`: `Spectrum`, `eigs`, `entropy`, `truncerror`.
- Decide if `size` and `axes` of `AbstractNamedDimsArray` (including the `ITensor` type) should output named sizes and ranges.
- Define an `ImmutableArrays` submodule and have the `ITensor` type default to wrapping `ImmutableArray` data, with copy-on-write semantics. Also come up with an abstraction for arrays that manage their own memory, such as `AbstractCOWArray` (for copy-on-write) or `AbstractMemoryManagedArray`, as well as `NamedDimsArray` versions, and make `ITensor` a subtype of `AbstractMemoryManagedNamedDimsArray` or something like that (perhaps a good use case for an `isnamed` trait to opt in to automatic permutation semantics for indexing, contraction, etc.).
- Use StaticPermutations.jl for the dimension permutation logic in `TensorAlgebra` and `NamedDimsArrays`.
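To make the `fusedims`-based matricization concrete: a tensor decomposition reduces to a matrix decomposition once the chosen dimensions are fused into a row group and a column group. Below is a self-contained sketch on plain `Array`s; `fuse_dims` and `tensor_svd` are hypothetical stand-ins for `TensorAlgebra.fusedims` and the rewritten `svd`, not the actual API.

```julia
using LinearAlgebra

# Hypothetical stand-in for TensorAlgebra.fusedims: permute the row dimensions
# to the front, then reshape (fuse) into a matrix.
function fuse_dims(a::AbstractArray, rowdims::Tuple, coldims::Tuple)
    p = permutedims(a, (rowdims..., coldims...))
    nrows = prod(size(p)[1:length(rowdims)])
    return reshape(p, nrows, :), size(p)
end

# SVD of a tensor: fuse (matricize), decompose, then split the dimensions back.
function tensor_svd(a::AbstractArray, rowdims::Tuple, coldims::Tuple)
    m, psize = fuse_dims(a, rowdims, coldims)
    F = svd(m)
    r = length(F.S)  # rank of the decomposition (no truncation here)
    rowsize = psize[1:length(rowdims)]
    colsize = psize[length(rowdims)+1:end]
    U = reshape(F.U, rowsize..., r)    # split the fused row dimension
    Vt = reshape(F.Vt, r, colsize...)  # split the fused column dimension
    return U, F.S, Vt
end

a = randn(2, 3, 4)
U, S, Vt = tensor_svd(a, (1, 3), (2,))  # matricize as (2*4) × 3
size(U), size(Vt)  # ((2, 4, 3), (3, 3))
```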
Testing

- Unit tests for `ITensors.ITensorsNamedDimsArraysExt`.
- Run the `ITensorsNamedDimsArraysExt` examples in tests.
- Unit tests for the `NDTensors.RankFactorization` module.
- Unit tests for `NamedDimsArrays.NamedDimsArraysTensorAlgebraExt`: `fusedims`, `qr`, `eigen`, `svd` (see the sketch after this list).
- Unit tests for `NDTensors.CombinerArrays` and `NDTensors.CombinerArrays.CombinerArraysTensorAlgebraExt`.
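As a sketch of the kind of unit test meant here (a hypothetical test, with plain arrays standing in for the real types): a fuse/split round trip should reproduce the original tensor.

```julia
using Test

@testset "fusedims round trip (sketch)" begin
    a = randn(2, 3, 4)
    m = reshape(a, 2 * 3, 4)        # fuse dimensions 1 and 2 into a matrix
    @test reshape(m, 2, 3, 4) == a  # splitting recovers the original tensor
end
```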
EmptyStorage

- `UnallocatedZeros` (in progress in [NDTensors] `UnallocatedArrays` and `UnspecifiedTypes` #1213).
- Use `UnallocatedZeros` as the default data type instead of `EmptyStorage` in ITensor constructors.

Diag

- `DiagonalArray`.
- Use `DiagonalArray` as the default data type instead of `Diag` in ITensor constructors.

UniformDiag

- `DiagonalArray` wrapping an `UnallocatedZeros` type.

BlockSparse

- `BlockSparseArray`.
- Use `BlockSparseArray` as the default data type instead of `BlockSparse` in ITensor QN constructors.
DiagBlockSparse

- Use `BlockSparseArray` with blocks storing `DiagonalArray`, and make sure all tensor operations work (see the sketch after this list).
- Replace `DiagBlockSparse` in ITensor QN constructors.
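A rough sketch of the data layout this implies, with `LinearAlgebra.Diagonal` standing in for `DiagonalArray` and a plain dictionary standing in for the block sparse storage (hypothetical names, not the BlockSparseArrays.jl API):

```julia
using LinearAlgebra

# Block sparse structure whose stored blocks are diagonal matrices, keyed by
# block position; only blocks (1, 1) and (2, 2) are stored here.
blocks = Dict(
    (1, 1) => Diagonal([1.0, 2.0]),
    (2, 2) => Diagonal([3.0, 4.0, 5.0]),
)
blockrows, blockcols = [2, 3], [2, 3]  # sizes of the block rows and columns

# Materialize to a dense matrix just to illustrate the layout.
function materialize(blocks, blockrows, blockcols)
    ro, co = cumsum([0; blockrows]), cumsum([0; blockcols])
    a = zeros(sum(blockrows), sum(blockcols))
    for ((bi, bj), block) in blocks
        a[ro[bi]+1:ro[bi+1], co[bj]+1:co[bj+1]] = block
    end
    return a
end

materialize(blocks, blockrows, blockcols)  # 5×5, diagonal within each stored block
```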
Combiner

Not sure what to do with this, but a lot of the functionality will be replaced by the new `fusedims`/`matricize` functionality in `TensorAlgebra`/`BlockSparseArrays`, and also by the new `FusionTensor` type. It will likely be superseded by `CombinerArray`, `FusionTree`, or something like that.
Simplify ITensor and Tensor constructors

- Make ITensor constructors more uniform by using a style like `tensor(storage::AbstractArray, inds::Tuple)`, avoiding constructors like `DenseTensor`, `DiagTensor`, `BlockSparseTensor`, etc.
- Use `rand(i, j, k)`, `randn(i, j, k)`, `zeros(i, j, k)`, `fill(1.2, i, j, k)`, `diagonal(i, j, k)`, etc. instead of `randomITensor(i, j, k)`, `ITensor(i, j, k)`, `ITensor(1.2, i, j, k)`, `diagITensor(i, j, k)`. Maybe make them lazy/unallocated by default where appropriate, i.e. use `UnallocatedZeros` for `zeros` and `UnallocatedFill` for `fill`.
- Consider `randn(2, 2)(i, j)` as a shorthand for creating an ITensor with indices `(i, j)` wrapping an array. Could also use `setinds(randn(2, 2), i, j)` (see the sketch after this list).
- Remove the automatic conversion to floating point in the ITensor constructor.
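A toy sketch of the `setinds` style, with stand-in `Index` and wrapper types invented here purely for illustration:

```julia
# Hypothetical stand-ins, not the real ITensors.jl types.
struct Index
    dim::Int
    name::Symbol
end

struct ITensorLike{T,N,A<:AbstractArray{T,N}}
    data::A                # the wrapped storage array
    inds::NTuple{N,Index}  # one named index per dimension
end

# Wrap an existing array with named indices, one per dimension.
setinds(data::AbstractArray{<:Any,N}, inds::Vararg{Index,N}) where {N} =
    ITensorLike(data, inds)

i, j = Index(2, :i), Index(2, :j)
t = setinds(randn(2, 2), i, j)  # wrap an existing array with named indices
```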
Define TensorAlgebra submodule

- A `TensorAlgebra` submodule which defines `contract[!][!]`, `mul[!][!]`, `add[!][!]`, `permutedims[!][!]`, `fusedims`/`matricize`, `contract(::Algorithm"matricize", ...)`, truncated QR, eigendecomposition, SVD, etc., with generic fallback implementations for `AbstractArray` and maybe some specialized implementations for `Array` (see the sketch after this list). (Started in [NDTensors] Start `TensorAlgebra` module #1265 and [TensorAlgebra] Matricized QR tensor decomposition #1266.)
- Use ErrorTypes.jl for catching errors and calling fallbacks in failed matrix decompositions.
- Move most of the matrix factorization logic from ITensors.jl into `TensorAlgebra`.
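The generic matricize-based contraction fallback can be sketched in a few lines on plain arrays: permute the contracted dimensions of each tensor together, multiply as matrices, and reshape back. `contract_matricized` is a hypothetical name, not the actual `TensorAlgebra` API.

```julia
# Generic "matricize" contraction fallback (sketch): adims[k] of `a` is
# contracted with bdims[k] of `b`; the contraction itself is a matrix multiply.
function contract_matricized(a::AbstractArray, adims::Tuple, b::AbstractArray, bdims::Tuple)
    afree = Tuple(setdiff(1:ndims(a), adims))  # uncontracted dimensions of a
    bfree = Tuple(setdiff(1:ndims(b), bdims))  # uncontracted dimensions of b
    amat = reshape(permutedims(a, (afree..., adims...)), :, prod(size(a)[collect(adims)]))
    bmat = reshape(permutedims(b, (bdims..., bfree...)), prod(size(b)[collect(bdims)]), :)
    return reshape(amat * bmat, size(a)[collect(afree)]..., size(b)[collect(bfree)]...)
end

a, b = randn(2, 3, 4), randn(3, 5, 4)
c = contract_matricized(a, (2, 3), b, (1, 3))  # contract dims 2, 3 of a with 1, 3 of b
size(c)  # (2, 5)
```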
New Tensor semantics

- Make `Tensor` fully into a wrapper array type with named dimensions, with "smart indices" for contraction and addition similar to what the `ITensor` type has right now. Rename it to `NamedDimsArray`. (Started in [NDTensors] `NamedDimsArrays` module #1267.)
- Use `struct NamedAxis{Axis,Name} axis::Axis; name::Name; end` as a more generic version of `Index`, where `Index` has a name that stores the ID, tags, and prime level (see the sketch after this list). (Started in [NDTensors] `NamedDimsArrays` module #1267.)
- `ITensors.val` for named indexing with dictionaries attached to dimensions/axes, like in AxisKeys.jl, DimensionalData.jl, NamedArrays.jl, etc.
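For concreteness, here is that `NamedAxis` struct together with a toy example of the "smart indices" behavior: aligning by name before adding, instead of requiring matching dimension order. `aligned_add` is a hypothetical helper, not the real `NamedDimsArrays` API.

```julia
# The proposed generic named axis type: any axis paired with any name.
struct NamedAxis{Axis,Name}
    axis::Axis
    name::Name
end

# Toy version of named-dimension alignment: permute `b` so its names match
# `a`'s names, then add. A real NamedDimsArray would carry the names internally.
function aligned_add(a::AbstractArray, anames::Vector{Symbol},
                     b::AbstractArray, bnames::Vector{Symbol})
    perm = [findfirst(==(n), bnames) for n in anames]
    return a + permutedims(b, perm)
end

a, b = randn(2, 3), randn(3, 2)
c = aligned_add(a, [:i, :j], b, [:j, :i])  # b is permuted to (:i, :j) before adding
```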