Sparse Direct Solver implemented in GraphBLAS language? #132
My goal for SuiteSparse is to add a layer that connects GraphBLAS with UMFPACK, CHOLMOD, CSparse, KLU+BTF, AMD, COLAMD, SuiteSparseQR, etc., so that these sparse A*x=b solvers can be used with GraphBLAS input matrices. The only limitation is a bit of software engineering to get the API designed correctly, and the time to do it. There's no fundamental limitation.

Writing a sparse direct solver on top of GraphBLAS would be much more challenging. Performing DFS in GraphBLAS+LAGraph would be great -- I'm hoping that the authors of that paper could contribute a GraphBLAS-based code to LAGraph. If that were available, then a Gilbert-Peierls sparse left-looking LU could be built on top of it. Solving x=L\b or x=U\b for lower/upper triangular systems would also be a challenge. Perhaps those should be considered as candidates for a future GxB_* method.

Having written several minimum degree ordering methods (AMD, COLAMD, CAMD, CCOLAMD), I can say that minimum degree would be quite challenging to do in GraphBLAS. These methods are fundamentally sequential, and GraphBLAS is best for bulk-style parallelism, where the parallelism can be expressed as large matrix/matrix or matrix/vector computations.

It would be interesting to consider a GraphBLAS operation that would solve A*x=b over a field. Then it could do all the work inside, including AMD ordering, etc., without being constrained to use just the GraphBLAS API. It's on my TODO list, or at least it's something I've thought about doing in the future.
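A minimal sketch of what such a connecting layer might look like, assuming a GrB_FP64 matrix small enough for UMFPACK's int-based (umfpack_di_*) interface; the wrapper name is a placeholder and is not part of SuiteSparse, and error checking is omitted:

```c
#include <GraphBLAS.h>
#include <umfpack.h>
#include <stdlib.h>

// Hypothetical bridge (not part of SuiteSparse): solve A*x = b with UMFPACK,
// where A is a GrB_Matrix of type GrB_FP64 and b, x are plain dense arrays.
int grb_umfpack_solve (double *x, GrB_Matrix A, const double *b)
{
    GrB_Index n, nz ;
    GrB_Matrix_nrows (&n, A) ;
    GrB_Matrix_nvals (&nz, A) ;

    // pull the matrix out of GraphBLAS as (i, j, aij) triplets
    GrB_Index *rows = malloc (nz * sizeof (GrB_Index)) ;
    GrB_Index *cols = malloc (nz * sizeof (GrB_Index)) ;
    double    *vals = malloc (nz * sizeof (double)) ;
    GrB_Matrix_extractTuples_FP64 (rows, cols, vals, &nz, A) ;

    // UMFPACK's int interface: copy the 64-bit indices into int arrays
    int *Ti = malloc (nz * sizeof (int)) ;
    int *Tj = malloc (nz * sizeof (int)) ;
    for (GrB_Index k = 0 ; k < nz ; k++)
    {
        Ti [k] = (int) rows [k] ;
        Tj [k] = (int) cols [k] ;
    }

    // triplet form -> the compressed-column form UMFPACK expects
    int *Ap = malloc ((n+1) * sizeof (int)) ;
    int *Ai = malloc (nz * sizeof (int)) ;
    double *Ax = malloc (nz * sizeof (double)) ;
    umfpack_di_triplet_to_col ((int) n, (int) n, (int) nz,
        Ti, Tj, vals, Ap, Ai, Ax, NULL) ;

    // fill-reducing ordering + symbolic analysis, numeric LU, then solve
    void *Symbolic, *Numeric ;
    umfpack_di_symbolic ((int) n, (int) n, Ap, Ai, Ax, &Symbolic, NULL, NULL) ;
    umfpack_di_numeric (Ap, Ai, Ax, Symbolic, &Numeric, NULL, NULL) ;
    umfpack_di_solve (UMFPACK_A, Ap, Ai, Ax, x, b, Numeric, NULL, NULL) ;

    umfpack_di_free_symbolic (&Symbolic) ;
    umfpack_di_free_numeric (&Numeric) ;
    free (rows) ; free (cols) ; free (vals) ; free (Ti) ; free (Tj) ;
    free (Ap) ; free (Ai) ; free (Ax) ;
    return (0) ;
}
```

A real layer would also have to handle the other types, rectangular and symmetric matrices (QR, Cholesky), and keep the factorization as an opaque object so that multiple right-hand sides can be solved without refactorizing.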
Thanks for the comment. This makes intuitive sense. Maybe a better question to ask is: which components of a sparse direct solver are suitable for (and can benefit from) GraphBLAS-style expression? For example, the nested dissection reordering in ParMETIS seems a better candidate than the sequential AMD reordering. Maybe also spectral nested dissection?
If I understand correctly, this means treating matrix factorization and triangular solve as operator-level code, not algorithm-level code, and leaving the responsibility to each GraphBLAS implementation. Internally it may just wrap an existing solver like SuperLU, which the user views as a black box anyway. This makes sense. Such an operation would also be useful for #131, because the coarsest level of AMG is solved by a direct factorization.
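To make the operator-level view concrete, such an extension might look like the purely hypothetical declaration below; neither the name nor the signature exists in the GraphBLAS specification or in SuiteSparse:GraphBLAS:

```c
// Hypothetical, for illustration only -- not part of any GraphBLAS spec.
// Solve A*x = b over the field implied by the types of A, b, and x.
// An implementation would be free to do all the work internally (convert A
// to its own format, compute an AMD/COLAMD ordering, factorize with a
// black-box solver such as UMFPACK, CHOLMOD, or SuperLU), since none of
// that needs to be expressed through the public GraphBLAS API.
GrB_Info GxB_Matrix_solve
(
    GrB_Vector x,               // output: solution of A*x = b
    const GrB_Matrix A,         // square, nonsingular system matrix
    const GrB_Vector b,         // right-hand side
    const GrB_Descriptor desc   // optional settings (e.g. ordering choice)
) ;
```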
Sparse matrix factorization relies heavily on combinatorial graph algorithms. For example, see the sections "Bipartite matching for sparse solvers" and "Graph partitioning" (for fill-reducing ordering) in the EXAGRAPH paper. Many of those algorithms should be expressible in a GraphBLAS-like language, and thus be optimized and parallelized by the underlying linear algebra primitives.
However, existing sparse solvers like MUMPS and SuperLU all use vertex-based programming. This is expected, since these solvers started in the 1990s-2000s, much earlier than the GraphBLAS effort. In fact, SuiteSparse contains GraphBLAS and the sparse solvers UMFPACK/CHOLMOD as separate, independent subpackages.
On a course web page I noticed a project proposal along these lines, but without any follow-up.
Just like the iterative solver issue (#131), I find this idea quite interesting but cannot find any existing work or literature on it. Given that even DFS has been expressed in GraphBLAS, I suspect there are many unexplored topics in translating vertex-based algorithms to GraphBLAS. Is it worth exploring for sparse matrix factorization algorithms?
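For reference, the canonical example of such a translation is BFS rather than DFS: a level-synchronous traversal written as repeated masked vector-matrix products over a Boolean semiring, much like the example in the GraphBLAS C API specification. A minimal sketch, with the adjacency matrix A and source node s as placeholders and error checking omitted:

```c
#include <GraphBLAS.h>

// Level-synchronous BFS expressed in GraphBLAS: level(i) = BFS depth of
// node i from source s, for a Boolean adjacency matrix A.
GrB_Info bfs_levels (GrB_Vector *level_out, GrB_Matrix A, GrB_Index s)
{
    GrB_Index n ;
    GrB_Matrix_nrows (&n, A) ;

    GrB_Vector level, frontier ;
    GrB_Vector_new (&level, GrB_INT32, n) ;        // depth of visited nodes
    GrB_Vector_new (&frontier, GrB_BOOL, n) ;      // current frontier
    GrB_Vector_setElement_BOOL (frontier, true, s) ;

    GrB_Index nvals = 1 ;
    for (int32_t depth = 1 ; nvals > 0 ; depth++)
    {
        // mark every node in the frontier with the current depth
        GrB_Vector_assign_INT32 (level, frontier, NULL, depth,
            GrB_ALL, n, GrB_DESC_S) ;
        // expand the frontier along the edges of A, masking out nodes
        // that already have a level (i.e., already visited)
        GrB_vxm (frontier, level, NULL, GrB_LOR_LAND_SEMIRING_BOOL,
            frontier, A, GrB_DESC_RSC) ;
        GrB_Vector_nvals (&nvals, frontier) ;
    }

    GrB_Vector_free (&frontier) ;
    *level_out = level ;
    return (GrB_SUCCESS) ;
}
```

The open question raised above is whether the traversals inside a factorization (elimination-tree construction, symbolic analysis, the DFS in Gilbert-Peierls LU) admit a similarly natural bulk formulation.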