
TFactor: bulkerify GEMV (both MC and GPU) #1214

Open

albestro wants to merge 52 commits into master from alby/tfactor-optim/bulk

Conversation

@albestro (Collaborator) commented Nov 15, 2024

In this PR we try to increase the parallelisation of the GEMV step of TFactor, which is by construction serial (all results go into the same block tile), by using workspaces to store partial results and then reducing them before the final TRMV step.

Algorithmically, the main change is that the stepGEMV loop has been replaced with a single stepGEMVAll, and the concept has been applied in a similar way to both backends, MC and GPU:

  • MC: pika::bulk splits the input tiles over multiple tasks; each task stores its partial result in its own workspace, and a single task at the end performs the reduction.
  • GPU: similarly to MC, the work is forked over different pika tasks, each computing a partial result on a different GPU stream. These tasks then join into a single task which performs the reduction.

To implement this solution, workspaces for intermediate results have been added (there is another option that requires neither additional workspaces nor a reduction, which we might explore for MC in another PR).
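For illustration, here is a minimal, self-contained sketch of the MC idea, assuming plain std::thread workers in place of pika::bulk tasks and flat std::vector buffers in place of tiles; all names are hypothetical and this is not the DLA-Future implementation:

// Conceptual sketch only: std::thread stands in for pika tasks, and a GEMV that
// sums the contributions of all tiles into the same output models the TFactor
// GEMV step where "all results go into the same block tile".
#include <algorithm>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

using Tile = std::vector<std::vector<double>>;  // small dense block, row-major

// Each worker accumulates y_partial += A_i * x for its chunk of tiles, writing
// only into its own workspace, so the partial GEMVs are independent.
void partial_gemv(const std::vector<Tile>& tiles, std::size_t begin, std::size_t end,
                  const std::vector<double>& x, std::vector<double>& y_partial) {
  for (std::size_t i = begin; i < end; ++i)
    for (std::size_t r = 0; r < tiles[i].size(); ++r)
      for (std::size_t c = 0; c < x.size(); ++c)
        y_partial[r] += tiles[i][r][c] * x[c];
}

// Fork the partial GEMVs over nworkers tasks, then reduce the workspaces into
// the final result (which would then feed the TRMV step).
std::vector<double> bulk_gemv(const std::vector<Tile>& tiles, const std::vector<double>& x,
                              std::size_t nworkers) {
  const std::size_t m = tiles.front().size();
  std::vector<std::vector<double>> workspaces(nworkers, std::vector<double>(m, 0.0));
  std::vector<std::thread> workers;

  const std::size_t chunk = (tiles.size() + nworkers - 1) / nworkers;
  for (std::size_t w = 0; w < nworkers; ++w) {
    const std::size_t begin = std::min(w * chunk, tiles.size());
    const std::size_t end = std::min(begin + chunk, tiles.size());
    workers.emplace_back(partial_gemv, std::cref(tiles), begin, end, std::cref(x),
                         std::ref(workspaces[w]));
  }
  for (auto& worker : workers)
    worker.join();

  // Single reduction task: sum the partial results before the final TRMV.
  std::vector<double> y(m, 0.0);
  for (const auto& ws : workspaces)
    for (std::size_t r = 0; r < m; ++r)
      y[r] += ws[r];
  return y;
}

In the actual PR the fork/join is expressed with pika senders (and GPU streams for the GPU backend), and the reduced result feeds the final TRMV on the T tile.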

TODO:

Close #798.

EDIT: since this is conceptually similar to #798, which is going to be closed as soon as this gets merged, I migrated here the doc fixes that happened there.

@albestro albestro added this to the Optimizations milestone Nov 15, 2024
@albestro albestro self-assigned this Nov 15, 2024
@albestro albestro force-pushed the alby/tfactor-optim/bulk branch from 6558780 to bf17fcc on November 18, 2024 13:16
Base automatically changed from alby/tfactor-optim/no-gemv-divergence to master November 22, 2024 11:28
@albestro albestro force-pushed the alby/tfactor-optim/bulk branch 4 times, most recently from 37ea75b to 2845339 on December 2, 2024 16:48
@albestro (Collaborator, Author) commented Dec 2, 2024

cscs-ci run

just to check if there are any major problems

@albestro albestro marked this pull request as ready for review December 3, 2024 16:08
@rasolca (Collaborator) left a comment


LGTM.
Please name new functions in snake case, and rename existing internal functions if possible.

Comment on lines 268 to 269
matrix::Matrix<T, Device::GPU> ws_T({nworkspaces * nrefls_step, nrefls_step},
                                    {nrefls_step, nrefls_step});
Collaborator:

All the Ts are allocated at scheduling. Better to reuse them.

@albestro albestro requested a review from rasolca December 5, 2024 11:22
@msimberg (Collaborator) left a comment


Only a few questions, nothing blocking.

include/dlaf/eigensolver/bt_reduction_to_band/impl.h (outdated review thread, resolved)
include/dlaf/factorization/qr/api.h (outdated review thread, resolved)
include/dlaf/factorization/qr/t_factor_impl.h (outdated review thread, resolved)
@albestro albestro force-pushed the alby/tfactor-optim/bulk branch from 05dc48d to f83f317 on December 5, 2024 15:00
include/dlaf/tune.h (outdated review thread, resolved)
@albestro albestro force-pushed the alby/tfactor-optim/bulk branch from 0397c9e to d370894 on December 9, 2024 09:47
@albestro albestro requested a review from rasolca December 9, 2024 09:47
@albestro (Collaborator, Author) commented

cscs-ci run

@albestro albestro marked this pull request as draft December 11, 2024 16:15
@albestro (Collaborator, Author) commented

Converted to draft in order to prevent merging, since I'm still doing some checks.

include/dlaf/tune.h (outdated review thread, resolved)
@albestro albestro force-pushed the alby/tfactor-optim/bulk branch from 2f90f23 to d199b88 on December 20, 2024 15:14
@albestro (Collaborator, Author) commented Jan 7, 2025

cscs-ci run

@albestro (Collaborator, Author) commented Jan 7, 2025

Apart from the fixes committed between the last two CI runs, which certainly fix some problems, one major difference between the CI runs is that the older one ran on Eiger while the latest ran on Daint (because of the rebase on master).

The older run on Eiger had some concerning failures: specifically, all the CPU UNIT_RANK_6 jobs failed due to a timeout in the Eigensolver/GenEigensolver MC Distributed test (example).

I will try to investigate why that's the case and why it hasn't happened on Daint.

@albestro albestro mentioned this pull request Jan 8, 2025
@albestro albestro force-pushed the alby/tfactor-optim/bulk branch from bc153ce to 152f8e7 on January 21, 2025 16:40
@albestro albestro force-pushed the alby/tfactor-optim/bulk branch from 152f8e7 to 0d2bd4f on January 21, 2025 16:43
@msimberg msimberg modified the milestones: Optimizations, v0.8.0 Jan 27, 2025
@msimberg msimberg marked this pull request as ready for review January 27, 2025 15:12
include/dlaf/eigensolver/bt_reduction_to_band/impl.h (outdated review thread, resolved)
template <Backend backend, Device device, class T>
void computeTFactor(matrix::Panel<Coord::Col, T, device>& hh_panel,
                    matrix::ReadOnlyTileSender<T, Device::CPU> taus,
                    matrix::ReadWriteTileSender<T, device> t,
                    std::vector<matrix::ReadWriteTileSender<T, device>> workspaces,
Collaborator:

Very naive question: could you briefly explain why a vector of tile senders is more appropriate here instead of, e.g., a round-robin of tiles or a panel?

Collaborator (Author):

If I understand correctly, #1214 (comment) is on topic. The main idea is that in this way the caller can manage the workspaces, reusing them over multiple calls.

Collaborator:

Sorry if I'm not getting the point: can the caller not manage workspaces if the tiles are in a panel?

Collaborator:

Answering my own question after a call with @albestro: it is using a round-robin panel for the tile senders; the senders are just accessed by the caller of computeTFactor. From the call we concluded that it may be worth passing the panel itself to computeTFactor and letting it access the senders directly. This may be useful to avoid some splitTiles calls and unnecessary sender accesses that currently happen in the caller(s) of computeTFactor. It remains to be seen whether it is actually an improvement (in organization and/or performance). Thanks @albestro for the explanation and for looking into this!
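As a rough illustration of the two calling conventions discussed in this thread, here is a minimal sketch with hypothetical names (plain std::vector workspaces instead of DLA-Future panels and tile senders): the caller owns a reusable pool of workspaces and either hands the individual workspaces to the compute routine, or passes the pool itself and lets the routine access them.

// Sketch only: Workspace and WorkspacePool are stand-ins, not DLA-Future types.
#include <cstddef>
#include <vector>

using Workspace = std::vector<double>;

// Convention 1: the callee receives the individual workspaces (cf. the
// std::vector<ReadWriteTileSender> parameter above); the caller extracts them
// from its pool and can reuse the same storage on the next call.
void compute_t_factor(const std::vector<Workspace*>& workspaces) {
  for (Workspace* ws : workspaces) {
    (*ws)[0] += 1.0;  // stand-in for writing a partial result into the workspace
  }
}

// Convention 2: the callee receives the pool itself and accesses the workspaces
// directly, so the caller does not need to split the pool up front.
struct WorkspacePool {
  std::vector<Workspace> slots;
};

void compute_t_factor(WorkspacePool& pool) {
  for (Workspace& ws : pool.slots) {
    ws[0] += 1.0;  // stand-in for writing a partial result into the workspace
  }
}

int main() {
  // Caller-managed reuse: the same pool backs every call.
  WorkspacePool pool{std::vector<Workspace>(4, Workspace(64, 0.0))};

  std::vector<Workspace*> views;
  for (Workspace& ws : pool.slots)
    views.push_back(&ws);

  compute_t_factor(views);  // convention 1: pass the extracted workspaces
  compute_t_factor(pool);   // convention 2: pass the pool itself
}

The point raised above is that convention 2 would let computeTFactor access the senders directly and avoid the splitTiles and sender accesses currently done by its caller(s).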

include/dlaf/factorization/qr/t_factor_impl.h (outdated review thread, resolved)
include/dlaf/factorization/qr/t_factor_impl.h (outdated review thread, resolved)
include/dlaf/factorization/qr/t_factor_impl.h (outdated review thread, resolved)
include/dlaf/factorization/qr/t_factor_impl.h (outdated review thread, resolved)
@albestro albestro force-pushed the alby/tfactor-optim/bulk branch 2 times, most recently from 6d6fb9f to 6f05890 on February 3, 2025 17:13
src/init.cpp (outdated review thread, resolved)
@albestro albestro force-pushed the alby/tfactor-optim/bulk branch from 6f05890 to 3749abc on February 4, 2025 17:15
@albestro albestro force-pushed the alby/tfactor-optim/bulk branch from 3749abc to ffba9a4 on February 5, 2025 09:28
@@ -1063,7 +1075,14 @@ Matrix<T, Device::CPU> ReductionToBand<B, D, T>::call(Matrix<T, D>& mat_a, const
// TODO probably the first one in any panel is ok?
Collaborator (Author):

Random thought: I just saw this comment and I am wondering if we should have a look and see whether we can "merge" this workspace into the other one? @rasolca

Projects
Status: Review
Development

Successfully merging this pull request may close #798.

4 participants