
Accelerate DirectLiNGAM by parallelising causal ordering on GPUs with CUDA #169

Open
wants to merge 2 commits into main

Conversation

aknvictor

This PR includes an implementation that drastically speeds up (up to 32x on a consumer GPU) DirectLiNGAM and its variants, e.g. VarLiNGAM.

The approach is to allow for an optional dependency, https://github.com/Viktour19/culingam, which implements custom CUDA kernels for the pairwise likelihood-ratio causal ordering method.

The implementation has been tested locally on an NVIDIA RTX 6000 on a Linux machine, but tests on other setups are needed.
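For context, the causal-ordering step that the CUDA kernels parallelize is embarrassingly parallel over variable pairs. A simplified pure-NumPy sketch of root selection via the pairwise likelihood-ratio measure (Hyvärinen–Smith, as used in DirectLiNGAM) is shown below; this is an illustration of the algorithm being accelerated, not the PR's actual kernel code:

```python
import numpy as np

# Constants of the maximum-entropy approximation (Hyvarinen, 1998).
K1, K2, GAMMA = 79.047, 7.4129, 0.37457

def entropy(u):
    """Approximate differential entropy of a standardized variable."""
    return ((1 + np.log(2 * np.pi)) / 2
            - K1 * (np.mean(np.log(np.cosh(u))) - GAMMA) ** 2
            - K2 * np.mean(u * np.exp(-u ** 2 / 2)) ** 2)

def residual(xi, xj):
    """Residual of xi regressed on xj (least squares, centered data)."""
    return xi - (xi @ xj / (xj @ xj)) * xj

def find_root(X):
    """Index of the most exogenous variable (one DirectLiNGAM step).
    Each (i, j) pair is independent, which is what a GPU can batch."""
    n = X.shape[1]
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    scores = []
    for i in range(n):
        total = 0.0
        for j in range(n):
            if i == j:
                continue
            ri_j = residual(Xs[:, i], Xs[:, j])
            rj_i = residual(Xs[:, j], Xs[:, i])
            # Likelihood-ratio measure: positive favors x_i -> x_j.
            m = (entropy(Xs[:, j]) + entropy(ri_j / ri_j.std())
                 - entropy(Xs[:, i]) - entropy(rj_i / rj_i.std()))
            total += min(0.0, m) ** 2
        scores.append(total)
    return int(np.argmin(scores))

# Synthetic two-variable LiNGAM: x1 -> x2 with non-Gaussian noise.
rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 20000)
x2 = 2.0 * x1 + rng.uniform(-1, 1, 20000)
X = np.column_stack([x1, x2])
print(find_root(X))  # prints 0 (x1 identified as the root)
```

On the CPU this double loop over pairs is the bottleneck; the culingam kernels evaluate the pairwise measures in parallel on the GPU, which is where the reported speed-up comes from.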

@kunwuz
Collaborator

kunwuz commented Mar 3, 2024

Thanks, Victor. It looks great!

  • To make our dependencies as simple as possible, would it be possible to directly incorporate your modification into the causal-learn codebase?
  • Since the code of the LiNGAM-based methods is the same as that in the LiNGAM package, it seems that some correctness issues raised in the PR there are still lingering? (thanks @ikeuchi-screen for the review)

@aknvictor
Author

Hi Yujia,

Directly incorporating it would introduce CUDA dependencies that other algorithms don't need and could make installing causal-learn more complex, so while it's possible, I'm not sure it's the best option. Yes, the discussion in that PR is relevant, so we may want to wait for it to be resolved before proceeding with this PR. That said, since the issues appear to be related to variance across setups, it would be useful for you to test on your own setup as well.
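One common way to keep the CUDA path optional, so the base install stays CPU-only, is an import guard with a fallback. This is a hypothetical sketch: the guard, `select_backend` name, and fallback behavior are illustrative, not the PR's actual code:

```python
# Guard the optional GPU dependency: importing causal-learn must not
# fail on machines without CUDA or without culingam installed.
try:
    import culingam  # optional CUDA kernels (assumed package name)
    _HAS_GPU = True
except ImportError:
    _HAS_GPU = False

def select_backend(prefer_gpu: bool = True) -> str:
    """Pick the causal-ordering backend: CUDA kernels when culingam
    is importable and requested, otherwise the pure-NumPy fallback."""
    return "gpu" if (prefer_gpu and _HAS_GPU) else "cpu"

print(select_backend())
```

With this pattern, users who want the speed-up install the extra package explicitly, and everyone else is unaffected.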
