laplacian diagonal elements for fake nodes #18
Maosi Chen:

Hello Michaël,

My colleague told me that when I use the `graph.laplacian` function, I have to assign 0.0 to the diagonal elements of the `L` matrix on the fake nodes; the current function returns 1.0 on those elements.

My colleague's reasoning: on fake nodes the weights `W` are 0.0, so `d` will be 0.0. After `d += np.spacing(0.0)`, `d` has a value very close to 0.0 (e.g. 4.9e-324), so `d = 1 / np.sqrt(d)` yields a large value (e.g. 4.5e+161). `D*W*D` will then be 0.0 (because 0.0 times a large, but not infinite, number is 0.0), and `L = I - D*W*D` will be 1.0 on those diagonal elements. The original Laplacian definition `L = D - W` tells us the original `L` has 0.0 on fake nodes (because `D` and `W` are both 0.0 there), and normalizing (squashing) the original `L` by `D` shouldn't change that 0.0 to 1.0.

Is my colleague right on this?

Thanks.
Maosi Chen
Michaël:

Hi Maosi,

Regarding definitions, you're correct that the diagonal of the combinatorial Laplacian `L = D - W` is 0.0 on fake (disconnected) nodes. The normalized Laplacian `L = I - D^{-1/2} W D^{-1/2}` is, however, ill-defined for such nodes, since their degree is 0; with the `np.spacing` guard, the implementation ends up with 1.0 on those diagonal elements.

For this application it should not matter, as the value of a fake node should be zero anyway. But I guess this reasoning is related to your fear of propagating the bias (#17), right? And killing the value should clear any doubt.
Maosi Chen:

Hi Michaël,

Yes, we found that setting the diagonal elements to 0.0 on the fake nodes clears that doubt.

For my application (spatial interpolation on dynamic nodes), cnn_graph currently shows a larger loss (~0.2) than the stacked bidirectional LSTM (~0.05) after both have stabilized. I think maybe the change of graph (i.e. the locations and numbers of nodes) across samples is the cause? Because the weights trained in cnn_graph should be applied on a fixed graph?

Thanks.
Michaël:

Then it does not matter indeed. The method was clearly developed with a fixed graph in mind, but filters can definitely be applied to multiple graphs, especially if they are local, i.e. of low polynomial order.

Maybe you are destroying too much information with coarsening / pooling. I'm not sure I understand why you are doing this, but I saw in your other issue that you had 7 layers, with the last graphs being mostly disconnected.