
Sparse Direct Solver implemented in GraphBLAS language? #132

Open
learning-chip opened this issue Jun 23, 2022 · 2 comments
Labels
enhancement New feature or request

Comments


learning-chip commented Jun 23, 2022

Sparse matrix factorization relies heavily on combinatorial graph algorithms. See, for example, the sections "Bipartite matching for sparse solvers" and "Graph partitioning" (for fill-reducing ordering) in the EXAGRAPH paper. Many of those algorithms should be expressible in a GraphBLAS-like language, so that they get optimized and parallelized by the underlying linear algebra primitives.

However, existing sparse solvers such as MUMPS and SuperLU all use vertex-based programming. This is expected, since these solvers were started in the 1990s and 2000s, much earlier than the GraphBLAS effort. In fact, SuiteSparse contains GraphBLAS and the sparse solvers UMFPACK/CHOLMOD as separate, independent subpackages.

On a course webpage I noticed a proposed project along these lines:

Use the GraphBLAS (as implemented in SuiteSparse) to implement one or more of the graph algorithms used in sparse Cholesky factorization, for example:

  • Symbolic factorization (given a graph, compute the filled graph).
  • Symmetric elimination orders: minimum degree, nested dissection, reverse Cuthill-McKee.
  • Elimination tree (given a graph, compute the elimination tree without computing the filled graph).

but I could not find any follow-up on it.
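To make the last item in that list concrete, here is a minimal pure-Python sketch of Liu's elimination-tree algorithm with path compression (the input layout `lower_rows` and all names are my own choices for illustration; this is the vertex-based formulation that the project would try to re-express in GraphBLAS):

```python
def elimination_tree(lower_rows):
    """Compute the elimination tree of a sparse symmetric matrix A.

    lower_rows[k] lists the columns i < k with A[k,i] != 0, i.e. the
    strict lower triangle of A stored by rows. Returns parent[], where
    parent[j] is the etree parent of column j (-1 for a root).
    """
    n = len(lower_rows)
    parent = [-1] * n
    ancestor = [-1] * n  # path-compressed ancestor pointers
    for k in range(n):
        for i in lower_rows[k]:
            # climb the compressed ancestor chain from i up toward k
            while i != -1 and i < k:
                nxt = ancestor[i]
                ancestor[i] = k  # path compression
                if nxt == -1:
                    parent[i] = k  # first time i reaches an ancestor
                i = nxt
    return parent
```

For a tridiagonal matrix (a path graph) the etree is a chain, e.g. `elimination_tree([[], [0], [1], [2]])` returns `[1, 2, 3, -1]`; a star centered at vertex 0 gives the same chain, reflecting the fill caused by eliminating the hub first.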

As with the iterative solver issue (#131), I find this idea quite interesting, but I cannot find any existing work or literature on it. Given that even DFS has been expressed in GraphBLAS, I suspect there are many unexplored topics in translating vertex-based algorithms to GraphBLAS. Is this worth exploring for sparse matrix factorization algorithms?

@DrTimothyAldenDavis
Member

My goal for SuiteSparse is to add a layer that connects GraphBLAS with UMFPACK, CHOLMOD, CSparse, KLU+BTF, AMD, COLAMD, SuiteSparseQR, etc, so that these sparse A*x=b solvers can be used with GraphBLAS input matrices. The only limitation is a bit of software engineering to get the API designed correctly, and the time to do it. There's no fundamental limitation.

Writing a sparse direct solver on top of GraphBLAS would be much more challenging.

Performing DFS in GraphBLAS+LAGraph would be great -- I'm hoping that the authors of that paper could contribute a GraphBLAS-based code to LAGraph. If that were available then a Gilbert-Peierls sparse left-looking LU could be built on top of it. Solving x=L\b or x=U\b for lower/upper triangular systems would also be a challenge. Perhaps those should be considered as candidates for a future GxB_* method.
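To illustrate why the DFS matters here, this is a pure-Python sketch of the sparse triangular solve at the heart of Gilbert-Peierls (data layout and names are mine, purely illustrative): a symbolic DFS over the graph of L computes Reach(b), then the numeric elimination visits exactly those columns in topological order.

```python
def sparse_lower_solve(L_cols, b):
    """Solve L x = b where L is unit-diagonal lower triangular and b is sparse.

    L_cols[j] is a dict {i: L[i,j]} holding the strict lower part of
    column j (diagonal entries implicitly 1). b is a dict {i: b[i]}.
    Returns the sparse solution x as a dict of nonzeros.
    """
    n = len(L_cols)
    visited = [False] * n
    topo = []  # reverse-topological order of Reach(b)

    def dfs(j):  # recursive for clarity; a real kernel uses an explicit stack
        visited[j] = True
        for i in L_cols[j]:
            if not visited[i]:
                dfs(i)
        topo.append(j)

    # 1. symbolic phase: DFS from the pattern of b through the graph of L
    for j in b:
        if not visited[j]:
            dfs(j)

    # 2. numeric phase: scatter b, eliminate columns in topological order
    x = dict(b)
    for j in reversed(topo):
        xj = x.get(j, 0.0)
        if xj != 0.0:
            for i, lij in L_cols[j].items():
                x[i] = x.get(i, 0.0) - lij * xj
    return x
```

For example, with L having unit diagonal, L[1,0] = 2 and L[2,1] = 3, solving against b = e_0 touches all three unknowns: `sparse_lower_solve([{1: 2.0}, {2: 3.0}, {}], {0: 1.0})` gives `{0: 1.0, 1: -2.0, 2: 6.0}`. The symbolic DFS is exactly the part that a GraphBLAS-based DFS could supply.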

Having written several minimum degree ordering methods (AMD, COLAMD, CAMD, CCOLAMD), I think minimum degree would be quite challenging to do in GraphBLAS. These methods are fundamentally sequential, while GraphBLAS is best for bulk-style parallelism, where the parallelism can be expressed as large matrix/matrix or matrix/vector computations.

It would be interesting to consider a GraphBLAS operation that would solve A*x=b over a field. Then it could do all the work inside, including AMD ordering, etc, without being constrained to use just the GraphBLAS API. It's on my TODO list, or at least it is something I've thought about doing in the future sometime.

@DrTimothyAldenDavis added the enhancement (New feature or request) label Jun 24, 2022

learning-chip commented Jun 25, 2022

Having written several min degree ordering methods (AMD, COLAMD, CAMD, CCOLAMD), minimum degree would be quite challenging to do in GraphBLAS. They are fundamentally sequential, and GraphBLAS is best for bulk-style parallelism

Thanks for the comment. This makes intuitive sense.

Maybe a better question to ask is: which components of a sparse direct solver are suitable for (and can benefit from) GraphBLAS-style expression? For example, the nested dissection reordering in ParMETIS seems a better candidate than the sequential AMD reordering. Maybe also spectral nested dissection?
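The kernel of spectral nested dissection is a bulk linear-algebra computation, which is why it looks GraphBLAS-friendly. A toy pure-Python sketch of one spectral bisection step (all names are mine; this assumes a connected graph, and a real implementation would use a proper eigensolver such as LOBPCG rather than power iteration):

```python
import math
import random

def fiedler_split(adj, iters=500):
    """Bisect a connected graph by the sign of its (approximate) Fiedler vector.

    adj[i] lists the neighbors of vertex i. Power iteration is run on
    c*I - Laplacian (which flips the spectrum so the Fiedler eigenvector
    becomes dominant once the constant eigenvector is deflated).
    Returns the two vertex sets as a pair of lists.
    """
    n = len(adj)
    deg = [len(a) for a in adj]
    c = 2 * max(deg) + 1  # shift: Laplacian eigenvalues lie in [0, 2*max_deg]
    random.seed(0)
    v = [random.random() for _ in range(n)]
    for _ in range(iters):
        m = sum(v) / n
        v = [x - m for x in v]  # deflate the constant (trivial) eigenvector
        # w = (c*I - L) v, where (L v)_i = deg[i]*v[i] - sum of neighbor values
        w = [c * v[i] - (deg[i] * v[i] - sum(v[j] for j in adj[i]))
             for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return ([i for i in range(n) if v[i] < 0],
            [i for i in range(n) if v[i] >= 0])
```

On the path graph 0-1-2-3 this splits the vertices into {0, 1} and {2, 3}, the expected Fiedler cut. The matrix-vector products and reductions inside the loop are exactly the kind of bulk operations GraphBLAS handles well, unlike the elimination loop of AMD.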

Solving x=L\b or x=U\b for lower/upper triangular systems would also be a challenge. Perhaps those should be considered as candidates for a future GxB_* method.

It would be interesting to consider a GraphBLAS operation that would solve A*x=b over a field. Then it could do all the work inside, including AMD ordering, etc, without being constrained to use just the GraphBLAS API.

If I understand correctly, this means treating matrix factorization and triangular solve as operator-level code rather than algorithm-level code, leaving the responsibility to each GraphBLAS implementation. Internally it may just wrap an existing solver like SuperLU, which the user views as a black box anyway.

This kind of makes sense: from an abstract view, the triangular solve L^{-1}b is just another linear mapping A*x, and shares the same algebraic properties (linearity, composability) as a general mxv.
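As a quick sanity check of that abstract view, the map b -> L\b really is linear, i.e. it behaves algebraically like any other matrix-vector product. A pure-Python check (dense forward substitution, names and numbers mine, purely illustrative):

```python
def forward_solve(L, b):
    """Dense forward substitution: solve L x = b for lower-triangular L."""
    n = len(b)
    x = [0.0] * n
    for i in range(n):
        s = sum(L[i][j] * x[j] for j in range(i))
        x[i] = (b[i] - s) / L[i][i]
    return x

L = [[2.0, 0.0, 0.0],
     [1.0, 3.0, 0.0],
     [4.0, 5.0, 6.0]]
b1, b2 = [2.0, 5.0, 3.0], [4.0, 1.0, 0.0]
alpha = 2.5

# linearity: L \ (alpha*b1 + b2) == alpha*(L \ b1) + (L \ b2)
lhs = forward_solve(L, [alpha * u + v for u, v in zip(b1, b2)])
rhs = [alpha * u + v for u, v in
       zip(forward_solve(L, b1), forward_solve(L, b2))]
assert all(abs(u - v) < 1e-12 for u, v in zip(lhs, rhs))
```

So exposing the solve as an opaque operator (like mxv) loses nothing algebraically, while freeing the implementation to use any ordering and factorization internally.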

This would also be useful for #131, since the coarsest level of AMG is typically solved by a direct factorization.
