As written in `load_dataset` in `common/data.py`, the train_set : test_set ratio is 80 : 20.
However, since we randomly generate the positive and negative query-target pairs,
in the balanced case we get 4096 graphs for both the training set and the test set,
and in the imbalanced case we get 2048 graphs for both the training set and the test set.
So won't it be an issue that the train_set ends up the same size as the test_set?
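For illustration, here is a minimal, self-contained sketch of the situation being described; it is not the repository's actual implementation, and `sample_pairs`, the 0.5 positive ratio, and the fixed `n_pairs` budget are assumptions. It shows how an 80:20 split of the underlying graphs can still yield equally sized sets of generated pairs when each split is sampled with the same budget:

```python
# Sketch only: graphs are split 80:20, but query-target pairs are then sampled
# from each split with a fixed budget, so both splits yield the same number of
# generated pairs. Function names and the n_pairs budget are illustrative.
import random

import networkx as nx


def load_dataset(graphs, train_ratio=0.8, seed=0):
    """Split the raw graphs 80:20 into train / test pools."""
    random.Random(seed).shuffle(graphs)
    split = int(train_ratio * len(graphs))
    return graphs[:split], graphs[split:]


def sample_pairs(graph_pool, n_pairs=4096, seed=0):
    """Generate n_pairs positive/negative query-target pairs from a pool.

    Because the budget is fixed, the train and test pools produce the same
    number of pairs even though the pools themselves differ in size.
    """
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_pairs):
        target = rng.choice(graph_pool)
        if rng.random() < 0.5:
            # positive pair: query is an induced subgraph of the target
            nodes = rng.sample(list(target.nodes), k=min(5, target.number_of_nodes()))
            query, label = target.subgraph(nodes).copy(), 1
        else:
            # negative pair: query is taken from another random graph in the pool
            query, label = rng.choice(graph_pool), 0
        pairs.append((query, target, label))
    return pairs


graphs = [nx.gnp_random_graph(20, 0.2, seed=i) for i in range(100)]
train_graphs, test_graphs = load_dataset(graphs)        # 80 vs. 20 graphs
train_pairs = sample_pairs(train_graphs, n_pairs=4096)  # 4096 pairs
test_pairs = sample_pairs(test_graphs, n_pairs=4096)    # also 4096 pairs
```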
Can't confirm right now, but I believe there is an option `--val_size` to change the validation set size. I agree that val_size should be increased for a rigorous evaluation.
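If the flag works as described, it would simply set the sampling budget for the validation pairs; the sketch below is hypothetical (the default value and the wiring into the earlier `sample_pairs` sketch are assumptions, not the repository's actual argument parser):

```python
# Hypothetical sketch of a --val_size flag controlling how many query-target
# pairs are sampled for validation, independent of the 80:20 graph split.
import argparse

parser = argparse.ArgumentParser(description="subgraph matching training (sketch)")
parser.add_argument("--val_size", type=int, default=4096,
                    help="number of query-target pairs sampled for validation")
args = parser.parse_args(["--val_size", "16384"])

# With the sample_pairs sketch above, a larger validation set would then be
# requested as: val_pairs = sample_pairs(test_graphs, n_pairs=args.val_size)
print(args.val_size)  # 16384
```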