Add GranDAG algorithm #144
base: main
Conversation
Signed-off-by: eeulig <[email protected]>
Hi @eeulig, wow, thanks for this PR that covers an important part of causal discovery! As I understand it, this is a non-linear extension of NOTEARS, right?

This might go through a number of iterations to get the code to a mergeable state, mainly for maintainability, code robustness, and understanding. For example, see the PR for topological methods #129. If you're new to contributing to open source, this will also be a great learning experience in improving your scientific coding chops! Welcome to the community :)

I'll take some time over the next week or so to review it. In the meantime, WDYT about adding an example or a few examples that demonstrate how to use these methods and when they are useful? These help guide users who are not familiar with the methods (e.g. me too :)). You can look at the existing examples/ directory for the format and high-level layout.

Summary of what we'll work towards:
Lmk if this sounds good to you?
```diff
@@ -49,6 +49,7 @@ importlib-resources = { version = "*", python = "<3.10" }
 pywhy-graphs = { git = "https://github.com/py-why/pywhy-graphs.git", branch = 'main', optional = true }
 pygraphviz = { version = "^1.11", optional = true }
 pygam = "^0.9.0"
+torch = "^2.0.1"
```
I think we are happy to have a torch dependency.
```python
return expm_input.t() * grad_output
```
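For context, the line above is the backward pass of a custom autograd function for the trace of a matrix exponential, which NOTEARS-style methods use as an acyclicity penalty. A minimal sketch of that pattern (illustrative, with assumed names; the gradient of `tr(exp(A))` with respect to `A` is `exp(A)^T`):

```python
import torch


class TraceExpm(torch.autograd.Function):
    """Differentiable tr(exp(A)) via a custom autograd Function (sketch)."""

    @staticmethod
    def forward(ctx, input):
        # Compute exp(A) once and cache it for the backward pass.
        expm_input = torch.matrix_exp(input)
        ctx.save_for_backward(expm_input)
        return torch.trace(expm_input)

    @staticmethod
    def backward(ctx, grad_output):
        (expm_input,) = ctx.saved_tensors
        # d/dA tr(exp(A)) = exp(A)^T, scaled by the incoming gradient.
        return expm_input.t() * grad_output
```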
```python
def is_acyclic(adjacency: torch.Tensor) -> bool:
```
@adam2392 maybe the acyclicity check on the adjacency matrix should go to pywhy-graphs?
Yes, this is already implemented there.
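For reference, a NOTEARS-style acyclicity check can be sketched via the trace of the matrix exponential: a directed graph on `d` nodes is acyclic iff `tr(exp(A ∘ A)) == d`, since every cycle adds extra mass to the trace. This is an illustrative sketch, not the pywhy-graphs implementation:

```python
import torch


def is_acyclic(adjacency: torch.Tensor) -> bool:
    """Return True iff the weighted adjacency matrix encodes a DAG (sketch)."""
    d = adjacency.shape[0]
    # Hadamard square keeps the matrix nonnegative so the trace test applies.
    h = torch.trace(torch.matrix_exp(adjacency * adjacency)) - d
    return bool(h.item() < 1e-6)
```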
Thanks a lot @adam2392 and @robertness! I'm happy to look into providing an example similar to the existing ones. I noticed that some checks were not successful because of some GPU-related libraries.
Is this something that could be solved with code that sets the device to cpu unless cuda is available?
Great! I will try to find some time next week to start taking a look at this. Since this is an "extension" of notears, I want to see if it is possible to design the code in such a way to allow notears to be implemented modularly in a future PR.
Yeah, I think testing GPU on CIs is notoriously difficult, so we can force the CPU version, and the tests and examples should be super small-scale for the sake of just allowing it to run.
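The CPU fallback discussed above is usually a one-liner in PyTorch; a minimal sketch (dodiscover's actual handling may differ):

```python
import torch

# Use the GPU when available, otherwise fall back to CPU so that CI
# machines without CUDA can still run the tests and examples.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Illustrative usage: move the model and data to the selected device.
model = torch.nn.Linear(4, 4).to(device)
x = torch.randn(2, 4, device=device)
out = model(x)
```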
Changes proposed in this pull request:
This PR adds the following causal discovery method: Gradient-Based Neural DAG Learning, Lachapelle et al., 2020.
The code is heavily based on the authors' implementation available on GitHub (licensed under the MIT license). I noted this in the docstring of the main class and added a remark for each function directly taken from the above repository.
I also compared the SHD on ER graphs to the results reported in the paper (Tab. 1, 2, 8, 9) for the first 10 datasets provided by the authors here:
Overall, results seem to be consistent, and I assume most differences can be attributed to the different dataset samples.
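For context, the SHD (structural Hamming distance) used in such comparisons is typically computed as below. This is a standard definition, not necessarily the exact metric code used for the reported numbers:

```python
import numpy as np


def shd(true_adj: np.ndarray, est_adj: np.ndarray) -> int:
    """Structural Hamming distance between two directed graphs (sketch).

    Counts the edge additions, removals, and reversals needed to turn
    the estimated adjacency matrix into the true one.
    """
    diff = np.abs(true_adj - est_adj)
    sym = diff + diff.T
    sym[sym > 1] = 1  # a reversed edge counts as one mistake, not two
    return int(sym.sum() // 2)
```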
Before submitting

- … section of the CONTRIBUTING docs.
- Writing docstrings section of the CONTRIBUTING docs.

After submitting