Linear algebra
See \link cytnx::linalg cytnx::linalg \endlink for further details.
func | inplace | CPU | GPU | callable from Tensor
----------|-----------|-----|------|-----------
\link cytnx::linalg::Add Add\endlink | x | Y | Y | Y
\link cytnx::linalg::Sub Sub\endlink | x | Y | Y | Y
\link cytnx::linalg::Mul Mul\endlink | x | Y | Y | Y
\link cytnx::linalg::Div Div\endlink | x | Y | Y | Y
\link cytnx::linalg::Cpr Cpr\endlink | x | Y | Y | Y
\link cytnx::linalg::Mod Mod\endlink | x | Y | Y | Y
+,+=[tn]| x | Y | Y | Y (\link cytnx::Tensor::Add_ Tensor.Add_\endlink)
-,-=[tn]| x | Y | Y | Y (\link cytnx::Tensor::Sub_ Tensor.Sub_\endlink)
*,*=[tn]| x | Y | Y | Y (\link cytnx::Tensor::Mul_ Tensor.Mul_\endlink)
/,/=[tn]| x | Y | Y | Y (\link cytnx::Tensor::Div_ Tensor.Div_\endlink)
== [tn]| x | Y | Y | Y (\link cytnx::Tensor::Cpr_ Tensor.Cpr_\endlink)
\link cytnx::linalg::Svd Svd\endlink | x | Y | Y | Y
*\link cytnx::linalg::Svd_truncate Svd_truncate\endlink | x | Y | Y | N
\link cytnx::linalg::InvM InvM\endlink | \link cytnx::linalg::InvM_ InvM_\endlink | Y | Y | Y
\link cytnx::linalg::Inv Inv\endlink | \link cytnx::linalg::Inv_ Inv_\endlink | Y | Y | Y
\link cytnx::linalg::Conj Conj\endlink | \link cytnx::linalg::Conj_ Conj_\endlink | Y | Y | Y
\link cytnx::linalg::Exp Exp\endlink | \link cytnx::linalg::Exp_ Exp_\endlink | Y | Y | Y
\link cytnx::linalg::Expf Expf\endlink | \link cytnx::linalg::Expf_ Expf_\endlink | Y | Y | Y
*\link cytnx::linalg::ExpH ExpH\endlink | x | Y | Y | N
*\link cytnx::linalg::ExpM ExpM\endlink | x | Y | Y | N
\link cytnx::linalg::Eigh Eigh\endlink | x | Y | Y | Y
\link cytnx::linalg::Matmul Matmul\endlink | x | Y | Y | N
\link cytnx::linalg::Diag Diag\endlink | x | Y | Y | N
*\link cytnx::linalg::Tensordot Tensordot\endlink | x | Y | Y | N
\link cytnx::linalg::Outer Outer\endlink | x | Y | Y | N
\link cytnx::linalg::Kron Kron\endlink | x | Y | N | N
\link cytnx::linalg::Norm Norm\endlink | x | Y | Y | Y
\link cytnx::linalg::Vectordot Vectordot\endlink | x | Y | .Y | N
\link cytnx::linalg::Tridiag Tridiag\endlink | x | Y | N | N
*\link cytnx::linalg::Dot Dot\endlink | x | Y | Y | N
\link cytnx::linalg::Eig Eig\endlink | x | Y | N | Y
\link cytnx::linalg::Pow Pow\endlink | \link cytnx::linalg::Pow_ Pow_\endlink | Y | Y | Y
\link cytnx::linalg::Abs Abs\endlink | \link cytnx::linalg::Abs_ Abs_\endlink | Y | N | Y
\link cytnx::linalg::Qr Qr\endlink | x | Y | N | N
\link cytnx::linalg::Qdr Qdr\endlink | x | Y | N | N
\link cytnx::linalg::Min Min\endlink | x | Y | N | Y
\link cytnx::linalg::Max Max\endlink | x | Y | N | Y
*\link cytnx::linalg::Trace Trace\endlink | x | Y | N | N
* high-level linalg function
^ temporarily disabled
. floating-point types only

Iterative solver:
\link cytnx::linalg::Lanczos_ER Lanczos_ER\endlink
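A minimal C++ sketch of the three call forms in the table above (free function, operator, and in-place Tensor member). The {s, U, vT} output order of \link cytnx::linalg::Svd Svd\endlink is an assumption here; check the linalg docs for the exact convention.

```c++
#include "cytnx.hpp"
using namespace cytnx;

int main(){
    Tensor A({3,3},Type.Double);  // zero-initialized 3x3 double tensor
    Tensor B({3,3},Type.Double);

    Tensor C = linalg::Add(A,B);  // free-function form
    Tensor D = A + B;             // operator form, same result
    A.Add_(B);                    // in-place member form, modifies A

    // Svd returns multiple Tensors; the {s, U, vT} ordering is assumed
    auto svd_out = linalg::Svd(C);
    return 0;
}
```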
Random
See \link cytnx::random cytnx::random \endlink for further details.
func | Tensor | Storage | CPU | GPU
----------|-----|------|-----|-----------
*\link cytnx::random::Make_normal Make_normal\endlink | Y | Y | Y | Y
^\link cytnx::random::normal normal\endlink | Y | x | Y | Y
* this is initializer
^ this is generator
[Note] The difference between an initializer and a generator: an initializer fills an existing Tensor in place, while a generator allocates and returns a new Tensor.
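For example (a sketch; the exact argument lists of \link cytnx::random::Make_normal Make_normal\endlink and \link cytnx::random::normal normal\endlink are assumptions, see the random docs):

```c++
#include "cytnx.hpp"
using namespace cytnx;

int main(){
    // initializer: fills an existing Tensor in place
    Tensor A({3,4},Type.Double);
    random::Make_normal(A, 0., 1.);           // mean=0, std=1 (argument list assumed)

    // generator: allocates and returns a new Tensor
    Tensor B = random::normal({3,4}, 0., 1.); // shape overload assumed
    return 0;
}
```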
conda install
[Currently Linux only]
without CUDA
* python 3.6/3.7/3.8: conda install -c kaihsinwu cytnx
with CUDA
* python 3.6/3.7/3.8: conda install -c kaihsinwu cytnx_cuda
Some snippets:
Storage
* Memory container with GPU/CPU support. Handles type conversions (type casting between Storages; see the casting sketch after the snippet below) and moving between devices.
* Generic type object whose behavior is very similar to python.
Storage A(400,Type.Double);        // 400 doubles on the CPU (default device)
for(int i=0;i<400;i++)
    A.at<double>(i) = i;
Storage B = A;                     // A and B share the same memory, similar to python
Storage C = A.to(Device.cuda+0);   // C is a copy of A on GPU with gpu-id=0
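For the type casting mentioned above, a small sketch (that Storage's astype returns a new, casted Storage is my reading of the API):

```c++
Storage D = A.astype(Type.Float);  // cast Double -> Float into a new Storage (assumed)
```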
Tensor
* A tensor object with an API very similar to numpy and pytorch.
* Simple moving between CPU and GPU:
Tensor A({3,4},Type.Double,Device.cpu); // create tensor on CPU (default)
Tensor B({3,4},Type.Double,Device.cuda+0); // create tensor on GPU with gpu-id=0
Tensor C = B; // C and B share the same memory.
// move A to gpu
Tensor D = A.to(Device.cuda+0);
// inplace move A to gpu
A.to_(Device.cuda+0);
* Type conversion between types is available:
Tensor A({3,4},Type.Double);
Tensor B = A.astype(Type.Uint64); // cast double to uint64_t
* Virtual swap and permute: permute and swap operations do not move the underlying memory; only the shape/stride info changes.
* Call contiguous()/contiguous_() when the memory layout actually needs to be rearranged:
Tensor A({3,4,5,2},Type.Double);
A.permute_(0,3,1,2); // this will not change the memory, only the shape info is changed.
cout << A.is_contiguous() << endl; // this will be false!
A.contiguous_(); // call contiguous_() to actually move the memory.
cout << A.is_contiguous() << endl; // this will be true!
* Access a single element using .at:
Tensor A({3,4,5},Type.Double);
double val = A.at<double>(0,2,2);
* Access elements with python-like slicing:
typedef Accessor ac;
Tensor A({3,4,5},Type.Double);
Tensor out = A(0,":","1:4");
// equivalent to python: out = A[0,:,1:4]
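The `typedef Accessor ac;` above enables an equivalent Accessor-based form; a sketch, where the call names (`ac::all()`, `ac::range()`) and the initializer-list `operator[]` form are assumptions taken from the Accessor docs:

```c++
// same slice as above, written with Accessor objects instead of strings:
// ac(0) fixes the index, ac::all() is ':', ac::range(1,4) is '1:4' (names assumed)
Tensor out2 = A[{ac(0), ac::all(), ac::range(1,4)}];
```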
Fast Examples
See test.cpp for C++ usage.
See test.py for python usage.