Questions about the evaluation. #6

Open
nnnnnzy opened this issue Jul 16, 2022 · 0 comments

nnnnnzy commented Jul 16, 2022

What is the purpose of testing and recording accuracy every 100 rounds during training? Isn't the pre-training process unsupervised? According to the DGI paper and its code implementation, DGI only runs gradient descent on the unsupervised loss until the loss stops decreasing (early stopping), at which point training ends.
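For reference, this is roughly the training loop I am describing (only a sketch, not the code in this repository; `model`, `features`, and `adj` are placeholders for the encoder and its inputs):

```python
# Sketch of a DGI-style loop: train only on the unsupervised loss,
# stop early once the loss plateaus, never look at labels.
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # `model` is a placeholder encoder
best_loss, patience, wait = float("inf"), 20, 0

for epoch in range(10000):
    model.train()
    optimizer.zero_grad()
    loss = model.loss(features, adj)  # placeholder for the contrastive/InfoNCE loss; no labels used
    loss.backward()
    optimizer.step()

    if loss.item() < best_loss:       # early stopping is driven by the training loss alone
        best_loss, wait = loss.item(), 0
        torch.save(model.state_dict(), "best_model.pkl")
    else:
        wait += 1
        if wait >= patience:
            break
```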

I don't think the InfoNCE loss value is necessarily inversely proportional to the linear-evaluation result, but steering the training of graph contrastive learning by the linear-evaluation result amounts to fitting the dataset. So when should the training of graph contrastive learning end? And how can different graph contrastive learning methods be compared fairly?

Also, the final accuracy calculation in the code implementation only reports the accuracy of a single random split of the dataset; it does not average accuracy over multiple splits (or over multiple runs of logistic regression, as DGI does).
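Something like the following is what I have in mind for the final number (a sketch using scikit-learn, not this repository's code; `embeddings` and `labels` stand for the frozen node embeddings and the ground-truth classes):

```python
# Sketch: average linear-evaluation accuracy over several random splits
# (or several logistic-regression runs), instead of reporting one split.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def average_linear_eval(embeddings, labels, n_runs=20, train_size=0.1):
    accs = []
    for seed in range(n_runs):
        X_tr, X_te, y_tr, y_te = train_test_split(
            embeddings, labels, train_size=train_size,
            stratify=labels, random_state=seed)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        accs.append(accuracy_score(y_te, clf.predict(X_te)))
    return float(np.mean(accs)), float(np.std(accs))
```

Reporting the mean and standard deviation over such runs would make comparisons between methods less dependent on one lucky split.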
