This experiment performs a grid search over 40 different learning rates and lets us select the best learning rate for a target test accuracy. Different stopping conditions are used to save GPU time.
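In rough terms, the search trains the same model once per candidate learning rate, cuts each run short as soon as a stopping condition fires, and keeps the learning rate that reached the best test accuracy. The sketch below illustrates only that loop and is not the code in train-local.py: the tiny tf.keras convnet, the 0.80 accuracy target, the 30-epoch cap and the logarithmic learning-rate range are all illustrative assumptions.

# Minimal sketch of the learning-rate grid search with early stopping.
# The model, target accuracy, epoch cap and LR range are illustrative only;
# the real experiment uses the model and stopping conditions in this repo.
import numpy as np
import tensorflow as tf

TARGET_ACCURACY = 0.80   # stop a run once test accuracy reaches this (assumed value)
MAX_EPOCHS = 30          # hard cap per learning rate to save GPU time (assumed value)

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

def build_model():
    # deliberately tiny stand-in network, not the model used in the experiment
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])

class StopAtTarget(tf.keras.callbacks.Callback):
    # one possible stopping condition: end the run once test accuracy hits the target
    def on_epoch_end(self, epoch, logs=None):
        acc = (logs or {}).get('val_acc', (logs or {}).get('val_accuracy', 0))
        if acc >= TARGET_ACCURACY:
            self.model.stop_training = True

results = {}
for lr in np.logspace(-5, -1, num=40):   # 40 learning rates on a log scale
    model = build_model()
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    history = model.fit(x_train, y_train,
                        validation_data=(x_test, y_test),
                        epochs=MAX_EPOCHS,
                        callbacks=[StopAtTarget()],
                        verbose=0)
    accs = history.history.get('val_acc', history.history.get('val_accuracy'))
    results[lr] = (len(history.history['loss']), max(accs))   # (epochs used, best test accuracy)

best_lr = max(results, key=lambda lr: results[lr][1])
print('best learning rate:', best_lr, results[best_lr])

To run the full experiment locally: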
git clone https://github.com/Octavian-ai/learning-rates-cifar10
virtualenv -p python3 envname
source envname/bin/activate
pip3 install numpy tensorflow
cd learning-rates-cifar10
python3 train-local.py
Training is quite slow without a GPU, so it is easier to run the experiment on FloydHub; the instructions are given below.
sudo pip install -U floyd-cli
floyd login
cd learning-rates-cifar10
floyd init learning-rates-cifar10
floyd run --gpu --env tensorflow-1.8 --data signapoop/datasets/cifar-10/1:/data_set 'python train-floyd.py'
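The --data flag mounts the CIFAR-10 dataset at /data_set inside the FloydHub job. As a rough illustration of how a script could read it, the snippet below assumes the mount contains the standard pickled CIFAR-10 batch files (data_batch_1 to data_batch_5 and test_batch) directly under /data_set; the actual layout of the signapoop dataset and the loading code in train-floyd.py may differ.

# Sketch of reading the mounted dataset; assumes the standard pickled CIFAR-10
# batch files sit directly under /data_set (layout not verified).
import os
import pickle
import numpy as np

DATA_DIR = '/data_set'   # path set by the --data mount in the floyd run command

def load_batch(path):
    with open(path, 'rb') as f:
        batch = pickle.load(f, encoding='bytes')
    # each batch holds 10000 images as rows of 3072 bytes (3x32x32, channel-first)
    images = batch[b'data'].reshape(-1, 3, 32, 32).transpose(0, 2, 3, 1)
    labels = np.array(batch[b'labels'])
    return images, labels

train_parts = [load_batch(os.path.join(DATA_DIR, 'data_batch_%d' % i)) for i in range(1, 6)]
x_train = np.concatenate([p[0] for p in train_parts])
y_train = np.concatenate([p[1] for p in train_parts])
x_test, y_test = load_batch(os.path.join(DATA_DIR, 'test_batch'))
print(x_train.shape, y_train.shape, x_test.shape, y_test.shape)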
I would like to thank David Mack and Andrew Jefferson for their support on this task.
The experimental setup for choosing the best learning rate is derived from David Mack's article.
The model used in the experiment is taken from Serhiy Mytrovtsiy's work, available on GitHub.