This is amazing work, and I would like to apply it to another dataset. I see that you use the hyperparameter "skip-large". My data has only 3 databases, but each database has a large number of columns. Can I still use this model? When I preprocessed my own dataset with process_graphs.py, it took a very long time. Will my large databases also slow down training?