Commit

mostly faq and limitations
rileyjmurray committed Aug 25, 2024
1 parent 8706f06 commit 3d3aff0
Showing 8 changed files with 171 additions and 162 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -1,4 +1,5 @@
**/__pycache__/*
*_ignore*

# vim
*.sw*
46 changes: 6 additions & 40 deletions RandBLAS/base.hh
@@ -279,6 +279,7 @@ enum class MajorAxis : char {
Undefined = 'U'
};


#ifdef __cpp_concepts
// =============================================================================
/// @verbatim embed:rst:leading-slashes
@@ -295,16 +296,16 @@ enum class MajorAxis : char {
/// :nowrap:
///
/// \begin{gather}
/// \alpha^2 \cdot \mathbb{E}\left[ \mtxS^T\mtxS \right]=\mathbf{I}_{c \times c}& \nonumber \\
/// \,\beta^2 \cdot \mathbb{E}\left[ \mtxS{\mtxS}^T\, \right]=\mathbf{I}_{r \times r}& \nonumber
/// \theta^2 \cdot \mathbb{E}\left[ \mtxS^T\mtxS \right]=\mathbf{I}_{c \times c}& \nonumber \\
/// \,\phi^2 \cdot \mathbb{E}\left[ \mtxS{\mtxS}^T\, \right]=\mathbf{I}_{r \times r}& \nonumber
/// \end{gather}
///
/// hold for some :math:`\alpha > 0` and :math:`\beta > 0`.
/// hold for some :math:`\theta > 0` and :math:`\phi > 0`.
///
/// The *isometry scale* of the distribution
/// is :math:`\gamma := \alpha` if :math:`c \geq r` and :math:`\gamma := \beta` otherwise. If you want to
/// is :math:`\alpha := \theta` if :math:`c \geq r` and :math:`\alpha := \phi` otherwise. If you want to
/// sketch in a way that preserves squared norms in expectation, then you should sketch with
/// a scaled sample :math:`\gamma \mtxS` rather than the sample itself.
/// a scaled sample :math:`\alpha \mtxS` rather than the sample itself.
///
/// **Programmatic description**
///
@@ -343,44 +344,9 @@ concept SketchingDistribution = requires(SkDist D) {
{ D.isometry_scale } -> std::same_as<const double&>;
};
#else
/// .. math::
///
/// {\D}\texttt{.isometry\_scale} = \begin{cases} \alpha &\text{ if } r \leq c \\ \beta &\text{ if } r > c \end{cases}~.
#define SketchingDistribution typename
#endif

// =============================================================================
/// \fn isometry_scale_factor(SkDist D)
/// @verbatim embed:rst:leading-slashes
///
/// All sketching distributions in RandBLAS satisfy the following properties:
/// if :math:`\D` is a distribution over :math:`r \times c` matrices and
/// :math:`\mtxS` is a sample from :math:`\D`, then
/// (1) :math:`\mathbb{E}\mtxS = \mathbf{0}_{r \times c}` and (2)
///
/// .. math::
/// :nowrap:
///
/// \begin{gather*}
/// \alpha^2 \cdot \mathbb{E}\left[ \mtxS^T\mtxS \right]=\mathbf{I}_{c \times c}& \\
/// \,\beta^2 \cdot \mathbb{E}\left[ \mtxS{\mtxS}^T\, \right]=\mathbf{I}_{r \times r}&
/// \end{gather*}
///
/// hold for some :math:`\alpha > 0` and :math:`\beta > 0`.
///
/// This function returns
///
/// .. math::
///
/// \gamma = \begin{cases} \alpha &\text{ if } r \leq c \\ \beta &\text{ if } r > c \end{cases}~.
///
/// If you want to sketch in a way that preserves squared norms in expectation, then you
/// should sketch with a scaled sample :math:`\gamma \mtxS` rather than the sample itself.
///
/// @endverbatim
template <typename T, SketchingDistribution SkDist>
inline T isometry_scale_factor(SkDist D);


#ifdef __cpp_concepts
// =============================================================================
148 changes: 148 additions & 0 deletions rtd/source/FAQ.rst
@@ -0,0 +1,148 @@
FAQ and Limitations
==============================


(In)frequently asked questions about our design decisions
------------------------------------------------------------

Why the name RandBLAS?
RandBLAS derives its name from BLAS: the basic linear algebra subprograms. Its name evokes the *purpose* of BLAS rather
than the acronym. Specifically, the purpose of RandBLAS is to provide high-performance and reliable functionality that
can be used to build sophisticated randomized algorithms for matrix computations.

The RandBLAS API is also as close as possible to the BLAS API while remaining polymorphic. Some may find this
decision dubious, since the BLAS API is notoriously cumbersome for new users. We believe that these downsides
are made up for by the flexibility and portability of a BLAS-like API. It is also our hope that popular high-level
libraries that already wrap BLAS might find it straightforward to define similar wrappers for RandBLAS.

DenseDist and SparseDist are very simple immutable structs. Why bother having constructors for these classes, when you could just use initialization lists?
DenseDist and SparseDist have common members and type-specific members.
We wanted the common members to appear earlier in the initialization order, since that makes it easier to
preserve RandBLAS' polymorphic API when called from other languages.

Two important common members are major_axis and isometry_scale. Users rarely need to think about the former (since there are
sensible defaults for DenseDist and SparseDist) and they never need to make decisions for the latter.
What's more, there are type-specific members that the user should be mindful of, and these would be ill-suited to the trailing
positions in an initialization list.

Using constructors therefore has three benefits; a minimal sketch of the idea follows the list.

1. We can put the type-specific members earlier in the argument list,
2. We can include the default value for major_axis as a trailing constructor argument, and
3. We can ensure that isometry_scale is always set correctly.
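Here is a minimal sketch of that trade-off. The struct, its member names, and the scale formula are hypothetical stand-ins chosen for illustration; this is not the actual DenseDist or SparseDist definition.

.. code-block:: c++

    #include <algorithm>
    #include <cmath>
    #include <cstdint>

    enum class MajorAxis : char {Short = 'S', Long = 'L', Undefined = 'U'};

    // Hypothetical stand-in for a distribution struct. With an initialization
    // list the caller would have to supply every member in declaration order,
    // including isometry_scale. A constructor lets the type-specific members
    // (here, just the dimensions) lead, gives major_axis a sensible default,
    // and guarantees isometry_scale is computed consistently.
    struct HypotheticalDenseDist {
        const int64_t n_rows;
        const int64_t n_cols;
        const MajorAxis major_axis;
        const double isometry_scale;

        HypotheticalDenseDist(
            int64_t n_rows, int64_t n_cols, MajorAxis ma = MajorAxis::Long
        ) : n_rows(n_rows), n_cols(n_cols), major_axis(ma),
            // 1/sqrt(min(rows, cols)) is the isometry scale for an iid
            // variance-one dense distribution; shown here only to illustrate
            // that the constructor keeps this member consistent.
            isometry_scale(1.0 / std::sqrt((double) std::min(n_rows, n_cols))) {}
    };

    // Usage: the trailing default can simply be omitted.
    //     HypotheticalDenseDist D(10000, 50);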

Why don't you automatically scale your sketching operators to give partial isometries in expectation?
There are a few factors that led us to this decision. None of them is a huge deal on its own, but together they become significant. (A small numerical illustration of the isometry scale follows the list below.)

1. Many randomized algorithms are actually invariant under rescaling of the sketching operators that they use internally.
2. Sketching operators are easier to describe if we think in terms of their "natural" scales before considering their use as tools for dimension reduction.
For example, the natural scale for DenseSkOps is to have entries that are sampled iid from a mean-zero and variance-one distribution.
The natural scale for SparseSkOps (with major_axis==Short) is for nonzero entries to have absolute value equal to one.
Describing these operators at their isometry scales would require specifying their dimensions, noting which dimension is larger than the other,
and possibly supplying additional tuning parameters (like vec_nnz for SparseSkOps).
3. It's easier for us to describe how implicit concatenation of sketching operators works (see :ref:`this part <sketch_updates>` of our tutorial)
if using the isometry scale is optional, and off by default.
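To make the "natural scale versus isometry scale" distinction concrete, here is a small self-contained check using only the C++ standard library (no RandBLAS calls). It samples a d-by-m matrix with iid mean-zero, variance-one entries (the natural scale) and verifies that multiplying by the isometry scale 1/sqrt(d) preserves squared norms in expectation, since E[S^T S] = d * I for such a matrix.

.. code-block:: c++

    #include <cmath>
    #include <cstdint>
    #include <iostream>
    #include <random>
    #include <vector>

    int main() {
        int64_t d = 50, m = 400, trials = 2000;
        std::mt19937_64 gen(0);
        std::normal_distribution<double> gauss(0.0, 1.0);
        std::vector<double> x(m, 1.0);                // a fixed test vector
        double norm_x_sq = (double) m;                // ||x||^2
        double scale = 1.0 / std::sqrt((double) d);   // the isometry scale
        double avg = 0.0;
        for (int64_t t = 0; t < trials; ++t) {
            double sketch_sq = 0.0;
            for (int64_t i = 0; i < d; ++i) {
                double row_dot_x = 0.0;
                for (int64_t j = 0; j < m; ++j)
                    row_dot_x += gauss(gen) * x[j];   // (row i of S) * x
                sketch_sq += (scale * row_dot_x) * (scale * row_dot_x);
            }
            avg += sketch_sq / trials;
        }
        // avg should come out close to norm_x_sq (= 400 here).
        std::cout << "E||scale * S * x||^2 ~= " << avg
                  << ",  ||x||^2 = " << norm_x_sq << std::endl;
    }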

Why are all dimensions 64-bit integers?
RandNLA is interesting for large matrices, and it would be too easy to have an index overflow if we allowed 32-bit dimensions (see the snippet below).
We do allow 32-bit indexing for the buffers underlying sparse matrix data structures, but we recommend sticking with 64-bit.
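For a sense of scale, here is an illustrative snippet (not RandBLAS code): the entry count of a moderately large dense matrix already exceeds what a signed 32-bit index can address.

.. code-block:: c++

    #include <cstdint>
    #include <iostream>
    #include <limits>

    int main() {
        // A dense 100,000 x 50,000 matrix has 5e9 entries; the largest signed
        // 32-bit integer is about 2.1e9, so a flat 32-bit index would overflow.
        int64_t n_rows = 100000, n_cols = 50000;
        int64_t n_entries = n_rows * n_cols;
        std::cout << "entries:   " << n_entries << "\n"
                  << "int32 max: " << std::numeric_limits<int32_t>::max() << "\n";
    }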

I looked at the source code and found weird function names like "lskge3," "rskges," and "lsksp3." What's that about?
These names make RandBLAS easier to call from languages that don't support overloads.
Short and specific names make it possible to communicate efficiently and precisely (useful for test code and developer documentation).

Why does sketch_symmetric not use a "side" argument, like symm in BLAS?
There are many BLAS functions that include a "side" argument. This argument always refers to the argument with the most notable structure.
In symm, the more structured argument is the symmetric matrix.
In RandBLAS, the more structured argument is always the sketching operator. Given this, we saw three options for moving forward.

1. Keep "side," and have it refer to the position of the symmetric matrix. This is superficially similar to the underlying BLAS convention.
2. Keep "side," and have it refer to the position of the sketching operator. This is similar to BLAS at a deeper level, but people could
easily use it incorrectly.
3. Dispense with "side" altogether.

We chose the third option since that's more in line with modern APIs for BLAS-like functionality (namely, std::linalg).


(In)frequently asked questions about RandBLAS' capabilities
-----------------------------------------------------------

How do I call RandBLAS from other languages?
First, this depends on whether your language supports overloads.

* To get an important thing out of the way: we use both formal C++ overloads and C++ templates. The latter mechanism might as well constitute an overload from the perspective of other languages.
* We do have canonical function names that resolve the overloads arising from (1) the side on which an operand appears in matrix-matrix products and (2) the family of the sketching distribution.
* Our templating of numerical precision should be resolved by prepending "s" for single precision or "d" for double precision to class and function names (see the shim sketch below).
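The following is a hypothetical pattern, not a RandBLAS interface: it only illustrates how the "s"/"d" prefix rule could expose a templated C++ function to a language without overloads. The function name and signature are invented for the example.

.. code-block:: c++

    #include <cstdint>

    // A templated C++ function; callable only from C++.
    template <typename T>
    void scale_in_place(int64_t n, T alpha, T *x) {
        for (int64_t i = 0; i < n; ++i) x[i] *= alpha;
    }

    // Overload-free shims for other languages: "s" = single, "d" = double.
    extern "C" void sscale_in_place(int64_t n, float  alpha, float  *x) { scale_in_place(n, alpha, x); }
    extern "C" void dscale_in_place(int64_t n, double alpha, double *x) { scale_in_place(n, alpha, x); }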

This also depends on whether your language supports classes that can manage their own memory.

* The full API for DenseSkOp and SparseSkOp requires letting these objects manage their own memory. If you use the appropriate constructors, though, they'll let you manage all memory yourself.

Can functions of the form ``sketch_[xxx]`` do something other than sketching?
Absolutely. They can perform lifting, which is needed in some algorithms. They can also apply a square submatrix of a sketching operator (useful in distributed applications), in which case the output matrix is no smaller than the input matrix.

Can I sketch a const symmetric matrix that's only stored in an upper or lower triangle?
Yes, but there are caveats. First, you can only use dense sketching operators. Second, you'll have to call fill_dense(layout, dist, … buff …, rngstate) and then use buff in your own SYMM function (a rough sketch of that last step follows below).
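Here is a rough sketch of the "your own SYMM function" step, assuming column-major storage. The function name and loop are illustrative: in practice the triple loop would be a call to a SYMM routine from whatever BLAS you have on hand, and the buffer S would be populated by fill_dense rather than by the caller directly.

.. code-block:: c++

    #include <cstdint>
    #include <vector>

    // Illustrative only: compute B = A * S, where A is n-by-n symmetric with
    // only its upper triangle stored (column-major; entries below the diagonal
    // are never read) and S is an n-by-d dense sketching buffer.
    void symm_upper_times_buff(
        int64_t n, int64_t d,
        const std::vector<double> &A,   // n*n, upper triangle valid
        const std::vector<double> &S,   // n*d, column-major
        std::vector<double> &B          // n*d, column-major, overwritten
    ) {
        for (int64_t k = 0; k < d; ++k) {
            for (int64_t i = 0; i < n; ++i) {
                double acc = 0.0;
                for (int64_t j = 0; j < n; ++j) {
                    // Read A(i, j) from the stored (upper) triangle only.
                    double a_ij = (i <= j) ? A[i + j * n] : A[j + i * n];
                    acc += a_ij * S[j + k * n];
                }
                B[i + k * n] = acc;
            }
        }
    }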


Unavoidable limitations
------------------------

No complex domain support:
BLAS' support for this is incomplete. You can't mix real and complex, you can't conjugate without transposing, etc…
We plan on revisiting the question of complex data in RandBLAS a few years from now.

No support for sparse-times-sparse (aka SpGEMM):
This will probably "always" be the case, since we think it's valuable to keep RandBLAS' scope limited.

No support for subsampled randomized trigonometric transforms (SRFT, SRHT, SRCT, etc.):
We'd happily accept a contribution of a randomized Hadamard transform (without subsampling)
that implicitly zero-pads inputs when needed. Given such a function we could figure out
how we'd like to build sketching operators on top of it.

No support for DenseSkOps with Rademachers:
We'd probably need support for mixed-precision arithmetic to fully realize the advantage of
Rademacher over uniform [-1, 1]. It's not clear to us how we'd go about doing that. There
*is* the possibility of generating Rademachers far faster than uniform [-1, 1]. The implementation
of this method might be a little complicated.


Limitations of calling RandBLAS from other languages
----------------------------------------------------

We routinely use function overloading, and that reduces portability across languages.
See below for details on where we stand and where we plan to go to resolve this shortcoming.

We have a consistent naming convention for functions that involve sketching operators:
* [L/R] are prefixes used when we need to consider left and right-multiplication.
* The characters "sk" appearing at the start of a name or after [L/R] indicates that a function involves taking a product with a sketching operator.
* Two characters are used to indicate the structure of the data in the sketching operation.
The options for the characters are {ge, sy, ve, sp}, which stand for general, *explicitly* symmetric, vector, and sparse (respectively).
* A single-character [X] suffix is used to indicate the structure of the sketching operator. The characters are "3" (for dense sketching
operators, which would traditionally be applied with a BLAS 3 function) and "s" (for sparse sketching operators).

Functions that implement the overload-free conventions:
* [L/R]skge[X] for sketching a general matrix from the left (L) or right (R) with a matrix whose structure is indicated by [X].
C++ code should prefer the overloaded sketch_general.
* [L/R]sksp3 for sketching a sparse matrix from the left (L) or right (R) with a DenseSkOp.
C++ code should prefer the overloaded sketch_sparse, unless operating on a submatrix of a COO-format sparse data matrix is needed.

Functions that are missing implementations of this convention:
* [L/R]skve[X] for sketching vectors. This functionality is available in C++ with sketch_vector.
* [L/R]sksy[X] for sketching a matrix with *explicit symmetry*. This functionality is available in C++ with sketch_symmetric.

Some discussion

Our templating for numerical precision should be resolved by prepending "d" for double precision or "s" for single precision to class and function names.

RandBLAS requires a consistent naming convention across an API that supports multiple structured operands (e.g., sketching sparse data),
while conventions in the BLAS API only need to work when one operand is structured.
This is why our consistent naming convention might appear "less BLAS-like" than it could be.

All of these overload-free function names have explicit row and column offset parameters to handle submatrices of linear operators.
However, the overloaded versions of these functions have *additional* overloads based on setting the offset parameters to zero.


We have no plans for consistent naming of overload-free sparse BLAS functions. The most we do in this regard is offer functions
called [left/right]_spmm for SpMM where the sparse matrix operand appears on the left or on the right.

14 changes: 9 additions & 5 deletions rtd/source/api_reference/skops_and_dists.rst
@@ -21,21 +21,25 @@
Fundamentals
********************************************************************

.. very similar effects can be acheived in C, Fortran, or Julia. While there are
.. very similar effects can be achieved in C, Fortran, or Julia. While there are
.. certainly some popular programming languages that don't support this kind of API
.. (e.g., MATLAB, Python, and R), accessing RandBLAS from these languages should
.. be mediated by operator-overloaded objects in a way that's analogous to how one
.. would access BLAS.
RandBLAS has a polymorphic free-function API. We have spent a significant amount of
effort on minimizing the number of RandBLAS-specific datastructures needed in order
to acheive the polymorphic API.
to achieve that polymorphism.

RandBLAS is very light on C++ idioms. The main C++ idioms we use are
RandBLAS is very light on C++ idioms. The main idioms we use are
templating and function overloading, plus some mild memory management
with destructor methods for structs.
with destructor methods for structs. The only place we even use inheritance is
in our test code!

:ref:`Test of sketch updates <sketch_updates>`.
RandBLAS has a bunch of functions that aren't documented on this website.
If such a function looks useful, you should feel free to use it. If you
end up doing that and you care about your code's compatibility with future
versions of RandBLAS, then please let us know by filing a quick GitHub issue.


Abstractions
2 changes: 1 addition & 1 deletion rtd/source/conf.py
@@ -97,4 +97,4 @@
# numfig = True
math_numfig = True
math_eqref_format = "Eq. {number}" # use a non-breaking-space unicode character.
numfig_secnum_depth = 1
numfig_secnum_depth = 1
1 change: 1 addition & 0 deletions rtd/source/index.rst
@@ -6,6 +6,7 @@
Tutorial <tutorial/index>
API Reference <api_reference/index>
Changelog <updates/index>
FAQ and Limitations <FAQ>

.. default-domain:: cpp

7 changes: 5 additions & 2 deletions rtd/source/installation/index.rst
@@ -15,8 +15,6 @@ RandBLAS is most useful when called from programs that can access LAPACK,
or an equivalent library for dense matrix computations. However, we don't
require that such a library is available.

RandBLAS uses `C++20 concepts <https://en.cppreference.com/w/cpp/language/constraints>`_.
Make sure your compiler supports these!

CMake users
-----------
@@ -35,6 +33,11 @@ Check out our `examples <https://github.com/BallisticLA/RandBLAS/tree/main/examp
for CMake projects that use RandBLAS and `LAPACK++ <https://github.com/icl-utk-edu/lapackpp>`_
to implement high-level randomized algorithms.

.. warning::

Make sure to use the flag ``-Dblas_int=int64`` in the CMake configuration line for BLAS++.
If you don't do that, then you might get int32, which can lead to issues for large matrices.

Everyone else
-------------
Strictly speaking, we only need three things to use RandBLAS in other projects.