Baseline #1
Hi, thanks for your question. For ECC, DGCNN, and DiffPool, we use the code released with the work of F. Errica et al., "A Fair Comparison of Graph Neural Networks for Graph Classification" (ICLR 2020); see their code: https://github.com/diningphil/gnn-comparison. For Graph U-Net, SAGPool, and AttPool, we use their official code, but we unify their dataset splits and model-selection strategies to match those of F. Errica et al. and our method. For example, Graph U-Net splits the data with a random split from sklearn instead of using the default dataset split, and SAGPool runs many different splits and averages the recognition results. Here we use the default data splits.
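For illustration, a minimal sketch of what "unifying the splits" can look like: one deterministic, stratified 10-fold split generated once and reused by every baseline. `labels` (one class label per graph) is a hypothetical input; the actual split files are the ones shipped in the gnn-comparison repository.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def make_shared_folds(labels, n_folds=10, seed=0):
    # One fixed, stratified split that all baselines reuse, instead of
    # each method drawing its own random split.
    labels = np.asarray(labels)
    skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
    # The features passed as X are irrelevant to the split; only their
    # length and the class labels y are used for stratification.
    return list(skf.split(np.zeros(len(labels)), labels))
```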
Thanks! What do you think about the conclusion of this paper? https://arxiv.org/abs/2010.11418
Hi, one more question: it seems the evaluation strategy in this paper differs from that of F. Errica et al. (see diningphil/gnn-comparison#4). In this paper, as shown in Lines 258 and 265, the best test_acc is stored for each fold, and then the average test_acc over the ten folds is reported. Is that correct? Additionally, how did you handle Graph U-Net in your experiment: with your strategy or with F. Errica's? Looking forward to hearing from you. Thanks!
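To make the questioned protocol concrete, here is my reading of it as a hedged sketch; `run_fold` is a hypothetical stand-in for one fold's training run that returns the test accuracy after every epoch:

```python
import numpy as np

def cross_validate(folds, n_epochs):
    # For each fold, keep the best test accuracy seen over all epochs
    # (the pattern pointed at around Lines 258 and 265), then average
    # the ten per-fold maxima to obtain the reported score.
    best_per_fold = []
    for train_idx, test_idx in folds:
        acc_history = run_fold(train_idx, test_idx, n_epochs)  # hypothetical
        best_per_fold.append(max(acc_history))
    return float(np.mean(best_per_fold)), float(np.std(best_per_fold))
```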
Hi, in our opinion, the proposed pooling operation and the multiscale network architecture together improve the task to some extent; our ablation study over different variants of GXN presents results that verify this. For our pooling layer, the intuition and expectation are to abstract a graph that preserves sufficient and important information. Combined with our network, which extracts rich multiscale features, an informative pooling can improve performance.

Sorry, I have not yet had time to read https://arxiv.org/abs/2010.11418, but for our method we designed two experiments to verify our pooling from different perspectives (see the Appendix). First, if we use the selected vertices as the labeled samples for semi-supervised node classification (on social networks), we obtain high accuracy, indicating that we select the key nodes. Second, we perform graph pooling on smooth mesh graphs, and the results are reasonable.

The question about the experimental setting is a good one. We use the test set as the validation set; we did not notice these details in gnn-comparison, sorry for that. You can change the input data of our code. As for Graph U-Net, we also use the test set as the validation set, i.e., the same as ours.
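For the first of those Appendix experiments, a rough sketch of the check being described, under stated assumptions: `selected` would come from the GXN pooling layer (not implemented here), and a plain logistic-regression node classifier stands in for whatever model the paper actually uses.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def selected_vertices_check(features, labels, selected):
    # Train only on the vertices the pooling layer kept...
    clf = LogisticRegression(max_iter=1000).fit(features[selected], labels[selected])
    # ...and evaluate on all remaining vertices; high accuracy suggests
    # the selected vertices carry the key information of the graph.
    rest = np.setdiff1d(np.arange(len(labels)), selected)
    return clf.score(features[rest], labels[rest])
```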
Thanks!
Could I regard your results as the average of the best validation results in each fold?
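Spelled out (my reading, not confirmed by the authors), and given that the test set doubles as the validation set, the reported figure would be

$$\text{acc} = \frac{1}{10}\sum_{k=1}^{10}\max_{t}\,\text{acc}^{(k)}_{\text{test}}(t),$$

where $k$ indexes the ten folds and $t$ the training epochs.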
Hi,
Your work is really impressive! I have a question about the baseline methods: did you implement them yourselves or use the public code? To my knowledge, different baseline methods use different GCN modules. If you implemented them yourselves, could you please share your implementation? Thanks!