This is the code used to execute the experiments and generate the charts reported in the RetiLeNet submission.
Note to the reader: unfortunately, the final name was chosen after the whole study had been completed, with the prototype name `retinet` used throughout the code. Changing it at that point would have been a major source of bugs, so with the deadline approaching we chose to keep the code as it was and correct only the final outputs. This issue will, however, be addressed in the future.
This repository contains the following folders:
- `datasets`: here the training and test set datasets are stored in the namesake directories
- `modules`: includes the custom models and auxiliary classes/methods used throughout the code. These are split into two sub-folders:
  - `models`: containing the LeNet 5 (`dfclenet.py`) and RetiLeNet (`deepretinet.py`) implementations
  - `utils`: containing a training utility (`trainer.py`) as well as a wrapper for hidden layer retrieval (`piper.py`)
- `plots`: where all the generated plots are stored, separated into the two folders `accuracy_sweeps` and `box_plots`
- `results`: includes the `.csv` files generated by `accuracy_sweep.ipynb`
- `trained_models`: contains the state dictionaries of the trained models (see the loading sketch below)
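For orientation, the snippet below sketches how a trained model might be restored from `trained_models` using the implementations in `modules/models`. The class name, constructor signature and state-dictionary file name are assumptions inferred from the file names above, so adapt them to the actual code.

```python
# Minimal sketch: restoring a trained model from trained_models/.
# The class name DeepRetiNet and the file name below are assumptions
# inferred from the repository layout; adapt them to the actual code.
import torch

from modules.models.deepretinet import DeepRetiNet  # assumed class name

model = DeepRetiNet()                   # assumed default constructor
state_dict = torch.load(
    "trained_models/retinet_mnist.pt",  # hypothetical file name
    map_location="cpu",
)
model.load_state_dict(state_dict)
model.eval()                            # inference mode for evaluation/plotting
```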
In addition to the folders, several scripts are also present, the usage of which is described in the following section.
The scripts are based on common Python and PyTorch libraries, which can be installed with pip by running

    pip install torch torchvision tqdm matplotlib numpy pillow

Conda can be used as well, but it is necessary to first verify on which channel each package is located.
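To quickly verify that the installation went through and that PyTorch can see a GPU, a check along these lines can be run (only standard `torch`/`torchvision` calls are used):

```python
# Quick sanity check after installing the dependencies.
import torch
import torchvision

print("torch version:      ", torch.__version__)
print("torchvision version:", torchvision.__version__)
print("CUDA available:     ", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```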
After cloning the repo in your folder of choice

    cd folder_of_choice
    git clone https://github.com/janko-petkovic/retinet.git

you will find two types of scripts in the root folder: notebooks and plotting scripts.
The notebooks are used for training and for comparing the final accuracies on modified datasets:

- `training.ipynb`: trains a specific model (LeNet5, different types of RetiLeNet) on a specified dataset. The necessary parameters can be set inside the `input panel` cell of the notebook, and the available options are provided as text comments therein.
- `accuracy_sweep.ipynb`: once you have trained your RetiLeNet of choice and a LeNet5, compares them on the desired dataset over a $\mu$ and $\sigma$ sweep. Again, the necessary parameters can be set inside the `input panel` cell of the notebook (a sketch of such a cell is given below).
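For illustration, an `input panel` cell typically collects parameters along the lines of the sketch below; the variable names and values shown here are hypothetical, and the actual options are documented as comments inside the notebooks.

```python
# Hypothetical sketch of an "input panel" cell: the actual notebooks define
# their own variable names and accepted values (see the comments therein).
MODEL_NAME    = "RetiLeNet"   # e.g. "LeNet5" or a RetiLeNet variant
DATASET_NAME  = "MNIST"       # one of the datasets stored under datasets/
EPOCHS        = 20
BATCH_SIZE    = 64
LEARNING_RATE = 1e-3
DEVICE        = "cuda"        # fall back to "cpu" if no GPU is available
```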
Both of these notebooks are GPU dependent, so if you find yourself lacking your favourite Titan RTX Ultimate, you can run them on Colab. In that case I warmheartedly suggest the following procedure:
- In your Drive create a `Code` folder
- Upload the entire `retinet` folder into the `Code` folder and rename it `RetiNet`
- After that, run the modules, remembering to select the correct `Mount drive, PATH` cell in the `Prolegomena` section (a sketch of such a cell is shown below)
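For reference, a `Mount drive, PATH` cell usually amounts to something like the sketch below; the exact PATH value depends on where you placed the `RetiNet` folder and is therefore an assumption based on the layout suggested above.

```python
# Sketch of a "Mount drive, PATH" cell for Colab.
# The PATH below assumes the Drive layout suggested above (Code/RetiNet).
from google.colab import drive

drive.mount("/content/drive")

PATH = "/content/drive/MyDrive/Code/RetiNet"  # adjust if your folders differ

import sys
sys.path.append(PATH)  # makes the modules/ package importable from the notebook
```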
Caveat: you could also upload only the notebooks, reconstruct the folder tree and change the PATH cells (everything should be coded so that the PATH variables are the only parameters to be modified), but I don't really recommend it. I am aware that uploading the whole folder onto Drive is far from best practice, and I will try to optimize this procedure in future projects.
The remaining scripts generate the charts reported in the article submission and save them in the corresponding `plots` subfolder. In particular:

- `plot_accuracy_sweep.py`: generates the accuracy sweep plots starting from the data provided by `accuracy_sweep.ipynb`
- `plot_boxes_bipolar.py`: generates the box-violin charts for the input image and the first convolutional hidden output (corresponding to the bipolar cell activations)
- `plot_boxes_radiation.py`: generates the box-violin charts for the input image and the hidden layer generated at the end of the pre-cortical module (corresponding to the activations of the optic radiation)
- `plot_retina.py`: generates and shows the precortical filters learned by the RetiLeNet model
The instructions for their usage are very straightforward and should be easily retrievable from the `-h` option:

    python SCRIPT_OF_CHOICE.py -h

Remember to first train the models and generate the data you want to run the scripts on.