Can I feed different graph for each data? #5
Comments
You're welcome! Thanks for considering it. :) Yes, the current implementation assumes the graph is fixed, i.e. the same graph Laplacian is used across all samples / signals. You can however modify the implementation to feed a different graph (i.e. a different Laplacian computed from your edge weights) for each sample. In that case you make the assumption that the learned filters are transferable from one graph to the other, which only makes sense if your various graphs share similar properties. Cheers, Michaël
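For reference, a minimal NumPy/SciPy sketch of computing one rescaled Laplacian per sample from that sample's own edge weights, mirroring the ChebNet rescaling L̃ = 2L/λmax − I. The helper name `scaled_laplacian` and the toy data are assumptions of this sketch, not part of the repository:

```python
import numpy as np
import scipy.sparse
from scipy.sparse.csgraph import laplacian

def scaled_laplacian(W, lmax=2.0):
    """Hypothetical helper: build one sample's rescaled graph Laplacian
    from its weighted adjacency matrix W (n_nodes x n_nodes)."""
    # Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}.
    L = laplacian(scipy.sparse.csr_matrix(W), normed=True)
    # Rescale the spectrum from [0, lmax] to [-1, 1], as the Chebyshev
    # recurrence expects; lmax <= 2 holds for the normalized Laplacian.
    n = W.shape[0]
    return (2.0 / lmax) * L - scipy.sparse.identity(n, format='csr')

# One Laplacian per sample, each computed from that sample's edge weights.
rng = np.random.default_rng(0)
adjacencies = []
for _ in range(3):                      # three toy samples, 10 nodes each
    A = rng.random((10, 10))
    A = (A + A.T) / 2                   # symmetrize
    np.fill_diagonal(A, 0)              # no self-loops
    adjacencies.append(A)
laplacians = [scaled_laplacian(A) for A in adjacencies]
```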
Hi, I am also trying to achieve something similar to @hello072's comment. I want to classify/regress arbitrary graphs, where the signal may or may not be relevant (so purely based on their adjacency matrices). I could be wrong, but it does not look like a trivial generalization, since your models are defined with a single Laplacian, is that right? What would you suggest as the most efficient way to achieve this? Thank you for the Python notebooks explaining how to use your code, it really helps!
Thanks for your interest. To take the structure into account, you can add a constant signal on the graph (while keeping or not the feature signals). Then:
- If the graphs are all of the same size, you only need to feed a different Laplacian for each signal. The inferred class of the signal is then the predicted class of the graph.
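To make the suggestion concrete, a small sketch (assuming the sample and node counts discussed later in this thread) of what a constant signal looks like as input data:

```python
import numpy as np

n_samples, n_nodes = 1000, 10

# Structure-only classification: every node carries the constant value 1,
# so all discriminative information comes from each sample's own Laplacian.
X = np.ones((n_samples, n_nodes), dtype=np.float32)

# Pair X[i] with laplacians[i], the (n_nodes, n_nodes) Laplacian built
# from sample i's edge weights (see the sketch above).
```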
Hi mdeff, I am also thinking of doing graph classification, like rushil and hello072 above. I see you mentioned: "If the graphs are all of the same size, you only need to feed a different Laplacian for each signal. The inferred class of the signal is then the predicted class of the graph." How do you do this in your current code framework? Is this something straightforward? And in your usage notebook, you actually mention this task: "Another problem of interest is whole graph classification, with or without signals on top. We'll call that third regime graph classification / regression." Do you have an example of how this works? Another question I have is that my graph signal on each node is not a single scalar. Suppose all my graphs have 10 nodes, but each node has a signal vector of length 100; then my data matrix will be (#Samples, 10 (nodes), 100), as opposed to your current input format, which is (#Samples, #Nodes). Does your current code support vector inputs? You implied this could be done in the "whole graph classification" statement. Many thanks for your work!
Hi, thanks for your interest. The current code was not developed with this application in mind, so I don't have any example ready. It might in the future, but for now you'd have to adapt the code yourself. Having multiple features per node is akin to having multiple feature maps, and is thus already supported by the filtering function.
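As an illustration of the feature-map view, here is a minimal NumPy sketch of a Chebyshev graph convolution over F_in input feature maps. The function name `cheb_conv` and all shapes are assumptions of this sketch, not the repository's API:

```python
import numpy as np

def cheb_conv(L_scaled, X, theta):
    """Sketch of a Chebyshev graph convolution for one sample.
    L_scaled: (n, n) rescaled Laplacian, spectrum in [-1, 1].
    X:        (n, F_in) node features, i.e. F_in feature maps per node.
    theta:    (K, F_in, F_out) filter coefficients.
    Returns:  (n, F_out) output feature maps."""
    K, F_in, F_out = theta.shape
    # Chebyshev recurrence: T_0 = X, T_1 = L X, T_k = 2 L T_{k-1} - T_{k-2}.
    T = [X]
    if K > 1:
        T.append(L_scaled @ X)
    for k in range(2, K):
        T.append(2 * (L_scaled @ T[-1]) - T[-2])
    # y = sum_k T_k(L) X theta_k
    return sum(T[k] @ theta[k] for k in range(K))

# 10 nodes, 100 features per node -> 32 output maps, K = 4 hops.
n, F_in, F_out, K = 10, 100, 32, 4
L = np.zeros((n, n))                     # toy placeholder; use a real rescaled Laplacian
X = np.random.randn(n, F_in)
theta = 0.01 * np.random.randn(K, F_in, F_out)
Y = cheb_conv(L, X, theta)               # shape (10, 32)
```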
Hello mdeff, PS. Both the paper and the code were a great read.
Hi halwai, thanks for your interest. Your reasoning seems right. At the input and hidden layers you should not need to care which node in a graph corresponds to which node in the other, as feature maps are convolved with filters whose result only depends on the K-neighborhood of a node. The only time you care is when passing the features (which reside on the most coarsened graph) to the fully connected layers. I see two solutions, one being an attention mechanism that aggregates the node features in a permutation-invariant way (see the sketch below). Hope it helps. Cheers, Michaël
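As a concrete example of the attention mechanism mentioned here (and referenced again below), a minimal NumPy sketch of permutation-invariant attention pooling; `attn_pool` and the weight vector `w` are hypothetical names, not part of the repository:

```python
import numpy as np

def attn_pool(H, w):
    """Hypothetical attention pooling: collapse a variable number of
    node features H (n_nodes, F) into one fixed-size vector (F,).
    w: (F,) learned attention weights (an assumption of this sketch)."""
    scores = H @ w                          # one score per node
    alpha = np.exp(scores - scores.max())   # numerically stable softmax
    alpha /= alpha.sum()
    return alpha @ H                        # weighted sum over nodes -> (F,)

# Graphs of different sizes map to the same fixed-size representation,
# which can then be fed to the fully connected layers.
F = 32
w = np.random.randn(F)
g1 = attn_pool(np.random.randn(10, F), w)   # 10-node graph -> (32,)
g2 = attn_pool(np.random.randn(17, F), w)   # 17-node graph -> (32,)
```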
Hi mdeff, can you give me some advice on how I can change the code (which functions) to meet my requirement?
Hi @anthony123, thanks for your interest. In such a case you'll have to normalize the size of your data at some point. Graphs of different sizes are not a problem for graph convolutions, but they are for the fully connected layers (I assume you're interested in classifying whole graphs). In that case you probably want an attention mechanism, as discussed above. Having features of different dimensionality per node is more problematic. For graph convolutions you'll have to normalize them to a fixed number of feature maps as a pre-processing step (it's like having an image with a different number of colour channels per pixel). How to do this depends on your data. You may e.g. pad the features or summarize them with some statistics.
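A hedged sketch of the two pre-processing options named above, padding per-node features to a fixed length or summarizing them with statistics; the helper names and the toy data are hypothetical:

```python
import numpy as np

def pad_features(feats, F):
    """Zero-pad (or truncate) one node's variable-length feature
    vector to a fixed length F."""
    out = np.zeros(F, dtype=np.float32)
    n = min(len(feats), F)
    out[:n] = feats[:n]
    return out

def summarize_features(feats):
    """Summarize a variable-length feature vector with fixed statistics."""
    f = np.asarray(feats, dtype=np.float32)
    return np.array([f.mean(), f.std(), f.min(), f.max()])

node_feats = [np.random.randn(k) for k in (3, 7, 5)]           # ragged toy features
X_pad = np.stack([pad_features(f, 8) for f in node_feats])     # (3, 8)
X_sum = np.stack([summarize_features(f) for f in node_feats])  # (3, 4)
```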
Hello @mdeff, @anthony123, @fate3439. To carry on the discussion about nodes with vectors of length 100 rather than a scalar: given a dataset of 1000 samples (n_samples), each sample has 10 nodes and each node has a vector of 100 values, so dataset_dimension=(1000,10,100). For each sample I compute its Laplacian, which is of dimension=(100,100), and then feed it to a convolutional layer. Hence 10 convolutions per sample. Thank you
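Under the feature-map framing from earlier in the thread (100 values per node = 100 input feature maps on a 10-node graph), each sample's Laplacian would instead be (10, 10), and a single graph convolution per sample processes all 100 maps at once. A minimal one-hop NumPy sketch of the shapes involved; all names and the toy data are assumptions of this illustration:

```python
import numpy as np

n_samples, n_nodes, F_in, F_out = 1000, 10, 100, 32

X = np.random.randn(n_samples, n_nodes, F_in)            # (1000, 10, 100)
# One (10, 10) Laplacian per sample (toy placeholders here).
Ls = np.random.randn(n_samples, n_nodes, n_nodes)
W = 0.01 * np.random.randn(F_in, F_out)                  # filters shared across samples

# A single one-hop graph convolution per sample handles all 100
# feature maps at once: (10,10) @ (10,100) @ (100,32) -> (10,32).
Y = np.einsum('sij,sjf,fg->sig', Ls, X, W)
# Y.shape == (1000, 10, 32)
```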
Thank you for sharing this implementation.
Suppose each data sample has a different graph structure (same number of nodes, but different edges and edge weights).
In this case, can I feed a different graph for each sample? This implementation looks like the graph structure is fixed (grid, random, ...) for the whole dataset.
Thanks.
Regards, Sangyun Lee