
Baseline #1

Open
pursueorigin opened this issue Oct 21, 2020 · 6 comments

@pursueorigin

Hi,

Your work is really impressive! I have a question about the baseline methods: did you implement them yourself, or did you use the public code? To my knowledge, different baseline methods use different GCN modules. If you implemented them yourself, could you please share your implementation? Thanks!

@limaosen0 (Owner) commented Oct 22, 2020

> Your work is really impressive! I have a question about the baseline methods: did you implement them yourself, or did you use the public code? To my knowledge, different baseline methods use different GCN modules. If you implemented them yourself, could you please share your implementation? Thanks!

Hi, thanks for your question.

For ECC, DGCNN, and DiffPool, we use the code released by F. Errica et al., A Fair Comparison of Graph Neural Networks for Graph Classification (ICLR 2020); see their repository: https://github.com/diningphil/gnn-comparison.

For Graph U-Net, SAGPool, and AttPool, we use their official code, but we unify their dataset splits and model-selection strategies so that they match F. Errica et al. and our method. For example, Graph U-Net originally splits the data with a random sklearn split instead of the default dataset split, and SAGPool runs many different splits and averages the recognition results. Here we use the default data splits for all methods; see the sketch below.
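As a minimal sketch of what "unifying the splits" means, the snippet below loads one pre-computed 10-fold split and reuses it for every baseline. The file name, JSON layout, and the train/evaluate helpers are assumptions for illustration, not the exact gnn-comparison format:

```python
import json

# Hedged sketch: all baselines iterate over the same fixed folds instead of
# each drawing its own random split (file layout is an assumption).
def load_fixed_folds(path="DATASET_splits.json"):
    with open(path) as f:
        return json.load(f)  # e.g. [{"train": [...], "test": [...]}, ...] for 10 folds

# Instead of a Graph U-Net-style random split such as
#   train_idx, test_idx = sklearn.model_selection.train_test_split(indices, ...)
# every method runs:
# for fold in load_fixed_folds():
#     train(model, fold["train"])      # hypothetical helpers
#     evaluate(model, fold["test"])
```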

@pursueorigin (Author)

Thanks!

What do you think about the conclusions of this paper: https://arxiv.org/abs/2010.11418? They argue that popular pooling layers, such as DiffPool, do not actually help. How does your pooling layer fare? Thanks!

@pursueorigin (Author) commented Oct 24, 2020

Hi,

One more question: it seems the evaluation strategy in this paper is different from that of F. Errica et al. (see diningphil/gnn-comparison#4).

In this paper, as shown in Lines 258 and 265, the best test_acc is stored for each fold, and then the average test_acc over the ten folds is reported. Is that correct?

As shown in diningphil/gnn-comparison#4, F. Errica et al. use valid_acc to select the best model and then report that model's test_acc on the test set.
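For concreteness, here is a minimal sketch of the two protocols being contrasted, assuming per-epoch accuracy lists are available (all names are illustrative):

```python
# Protocol A (this paper, per the reading above): keep the best test
# accuracy seen during training, per fold.
def protocol_best_test(test_acc_per_epoch):
    return max(test_acc_per_epoch)

# Protocol B (gnn-comparison): pick the epoch with the best *validation*
# accuracy, then report the test accuracy at that epoch.
def protocol_val_selected(val_acc_per_epoch, test_acc_per_epoch):
    best_epoch = max(range(len(val_acc_per_epoch)),
                     key=val_acc_per_epoch.__getitem__)
    return test_acc_per_epoch[best_epoch]

# In both cases the reported number is the mean over the 10 folds,
# but the quantity being averaged differs.
```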

Additionally, how did you handle Graph U-Net in your experiments: with your strategy or with F. Errica's?

Looking forward to hearing from you.

Thanks!

@limaosen0 (Owner) commented Oct 24, 2020

> What do you think about the conclusions of this paper: https://arxiv.org/abs/2010.11418? They argue that popular pooling layers, such as DiffPool, do not actually help. How does your pooling layer fare?

Hi, in our opinion, the proposed pooling operation and the multiscale network architecture together improve performance to some extent; our ablation study over different variants of GXN presents results that verify this. For our pooling layer, the intuition and expectation are to abstract a graph that preserves sufficient important information. Combined with our network, which extracts rich multiscale features, an informative pooling can improve the performance.

Sorry, I have not yet had time to read https://arxiv.org/abs/2010.11418, but as for our method, we designed two experiments to verify our pooling from different perspectives (see the Appendix). One is that, if we use the selected vertices as the labeled samples for semi-supervised node classification (on social networks), we obtain high accuracy, indicating that we select the key nodes; a sketch of this setup follows below. The other is that we perform graph pooling on smooth mesh graphs, and the results are reasonable.
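A minimal sketch of that first verification experiment, assuming a PyTorch Geometric setup (the paper's actual experiment code is not reproduced here, and all names below are illustrative): the vertices kept by the pooling layer become the only labeled nodes, and a small GCN is trained on them for node classification.

```python
import torch
from torch_geometric.nn import GCNConv  # assumption: a PyG-style setup

class TinyGCN(torch.nn.Module):
    def __init__(self, in_dim, hidden, n_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, n_classes)

    def forward(self, x, edge_index):
        h = torch.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)

def eval_pooling_selection(data, selected_idx, epochs=200):
    """data: a PyG Data object; selected_idx: node indices kept by the
    pooling layer, used as the only labeled training nodes."""
    train_mask = torch.zeros(data.num_nodes, dtype=torch.bool)
    train_mask[selected_idx] = True

    model = TinyGCN(data.num_features, 16, int(data.y.max()) + 1)
    opt = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
    for _ in range(epochs):
        opt.zero_grad()
        out = model(data.x, data.edge_index)
        loss = torch.nn.functional.cross_entropy(out[train_mask],
                                                 data.y[train_mask])
        loss.backward()
        opt.step()

    # High accuracy on the unlabeled nodes suggests the pooling layer
    # selected informative "key" vertices.
    pred = model(data.x, data.edge_index).argmax(dim=1)
    test_mask = ~train_mask
    return (pred[test_mask] == data.y[test_mask]).float().mean().item()
```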

The question about the experimental setting is a good one. We use the test set as the validation set; we did not notice these details in gnn-comparison, sorry for that. You can change the input data in our code. As for Graph U-Net, we also use the test set as the validation set, i.e., the same setting as ours.

@pursueorigin (Author)

Thanks!

@veophi commented Jan 15, 2021

> The question about the experimental setting is a good one. We use the test set as the validation set ... As for Graph U-Net, we also use the test set as the validation set, i.e., the same setting as ours.

Could I regard your results as the average of the best validation results in each fold?
