Question about batch_size and training steps #25

Open
FFppran opened this issue Apr 10, 2018 · 1 comment
Comments

@FFppran

FFppran commented Apr 10, 2018

My GPU has only 4 GB of memory (GTX 1050 Ti), so setting batch_size=32 raises a memory error. If I use a smaller batch size, do I need to increase the number of training steps to make up for the fewer samples per step?

@LevinJ
Owner

LevinJ commented Apr 11, 2018

Hi @FFppran, I think you might first try training with the current iteration-number configuration and see how it goes. It's possible that the change of batch_size has no effect on the final result, since the training already runs to completion (both train and test error plateaued in the current learning chart).
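
For reference, if the loss has not yet plateaued and you do decide to scale the step count, a common rule of thumb is to keep the total number of samples seen (batch_size × training steps) roughly constant. A minimal sketch with placeholder numbers; the actual flag names and step counts in this repo may differ:

```python
# Rule of thumb: keep batch_size * training_steps (total samples seen)
# roughly constant when shrinking the batch size for memory reasons.
# All numbers below are hypothetical, not the repo's actual configuration.

original_batch_size = 32      # the batch size that triggers the OOM error
original_steps = 100_000      # hypothetical original iteration count

new_batch_size = 8            # something that fits in 4 GB of GPU memory

# Scale the step count so the model still sees the same number of samples.
new_steps = original_steps * original_batch_size // new_batch_size

print(f"batch_size={new_batch_size}, training_steps={new_steps}")
# batch_size=8, training_steps=400000
```

Whether the extra steps actually help depends on whether the curves have already flattened, as noted above; smaller batches also produce noisier gradients, so the learning rate may need to be reduced as well.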
