Mru tests #38

Open · gmagannaDevelop wants to merge 54 commits into master from mru_tests

Commits (54)
73b8b45
:poop: Added my implementation of multiresunet.py
gmagannaDevelop Oct 27, 2019
42c9158
Nothing really functional here. Debugging.
gmagannaDevelop Oct 27, 2019
d825e80
:bug: Remove an old local import in multiresunet2.py
gmagannaDevelop Oct 27, 2019
9a3a862
Merge remote-tracking branch 'refs/remotes/origin/mru_tests' into mru…
gmagannaDevelop Oct 27, 2019
7c834db
pip install persists across commits. Package should be uninstalled an…
gmagannaDevelop Oct 27, 2019
61e5047
All imports seem to work, making an intermediate commit because I wan…
gmagannaDevelop Oct 27, 2019
564331b
:alembic: Decorated train_segnet()
gmagannaDevelop Oct 27, 2019
eefa800
Model is training and logs are correctly generated.
gmagannaDevelop Oct 27, 2019
4f07b8f
:bug: Added log_path
gmagannaDevelop Oct 29, 2019
39cce4a
Modifying a lot of stuff. Logging is being improved.
gmagannaDevelop Oct 29, 2019
44be8c1
Refactoring the notebook.
gmagannaDevelop Oct 29, 2019
7f5e5fe
:alembic: Added **kw to train_segnet parameters
gmagannaDevelop Nov 4, 2019
26fca03
New file structure allows better performance, overall.
gmagannaDevelop Nov 4, 2019
a4dab7b
Comparative methodology implemented.
gmagannaDevelop Nov 4, 2019
8f35a4f
:truck: Moved notebooks to example_notebooks
gmagannaDevelop Nov 4, 2019
ef4eeec
~
gmagannaDevelop Nov 4, 2019
42363b1
:boom: Added original implementation
gmagannaDevelop Nov 4, 2019
b32acb0
:bug: Corrected errors in original implementation file.
gmagannaDevelop Nov 4, 2019
f7427bc
Poor performance of all three models on Dataset2 without data augmenta…
gmagannaDevelop Nov 5, 2019
40a157a
:alembic: New Metrics
gmagannaDevelop Jan 12, 2020
040f1f6
The last tests did not yield positive results. I will now focus on re…
gmagannaDevelop Jan 13, 2020
b864ee0
First replication attempt. Not yet functional, but this is the base s…
gmagannaDevelop Jan 13, 2020
fe5501c
:truck: Moved notebooks to example_notebooks/
gmagannaDevelop Jan 13, 2020
133d7c9
:truck: Replaced metrics.py <- metrics_abdiel.py
gmagannaDevelop Jan 20, 2020
db66a07
First MultiResUNet attempt at ISBI 2012 challenge, http://brainiac2.m…
gmagannaDevelop Jan 20, 2020
c5ae060
Class Segmed will allow easy interaction with the model.
gmagannaDevelop Jan 21, 2020
ca7704b
Class segmed is progressing. Method to generate train and validation …
gmagannaDevelop Jan 22, 2020
95c69a9
Class implementation now includes more descriptive names such as Segm…
gmagannaDevelop Jan 22, 2020
02f39fc
:heavy_plus_sign: Added needed modules to ez.py
gmagannaDevelop Jan 23, 2020
665a65b
Merge branch 'mru_tests' of github.com:gmagannaDevelop/segnet into mr…
gmagannaDevelop Jan 23, 2020
20504db
_Segmed__compile now is correctly decorated. To decorate a method, wi…
gmagannaDevelop Jan 23, 2020
217e011
:rocket: Updating ez.py
gmagannaDevelop Jan 25, 2020
3a3d220
Train method is now functional.
gmagannaDevelop Jan 25, 2020
53acb25
:alembic: Modified logging : timing.time_log
gmagannaDevelop Jan 25, 2020
93eb00e
:bulb: segnet/utils/timing.py
gmagannaDevelop Jan 27, 2020
64e7ded
Saving model history is now functional. A couple modifications to add…
gmagannaDevelop Jan 27, 2020
ef28415
Segmed class now allows comments.
gmagannaDevelop Jan 29, 2020
5040015
:children_crossing: segnet/utils/timing.py
gmagannaDevelop Jan 29, 2020
b472e52
Comment system is fully functional. Next commits will try to encapsul…
gmagannaDevelop Jan 29, 2020
d76a8af
Logging by directory functional, I perhaps ran out of GPU hours as ca…
gmagannaDevelop Jan 29, 2020
5f3ebc8
The class seems to be fully functional, but I've run out of GPU runti…
gmagannaDevelop Jan 29, 2020
e5c83fd
Added saving a model description (model.summary()) to the directory a…
gmagannaDevelop Jan 30, 2020
d102cb1
Added GPU logging system to Segmed Class.
gmagannaDevelop Jan 30, 2020
50ad954
:heavy_plus_sign: Added dependency
gmagannaDevelop Jan 30, 2020
b3e0bab
Merge branch 'mru_tests' of github.com:gmagannaDevelop/segnet into mr…
gmagannaDevelop Jan 30, 2020
713f25f
dummy commit before __init__ method transition.
gmagannaDevelop Jan 30, 2020
40bf4e1
:boom: Added utility class Segmed
gmagannaDevelop Jan 31, 2020
f59a785
Merge branch 'mru_tests' of github.com:gmagannaDevelop/segnet into mr…
gmagannaDevelop Jan 31, 2020
2fc89df
:children_crossing: Added example.
gmagannaDevelop Jan 31, 2020
b9ebeea
:chart_with_upwards_trend: Added verbose parameter.
gmagannaDevelop Feb 1, 2020
cd2a6fb
:bug: Added exception handling.
gmagannaDevelop Feb 1, 2020
c4c6c44
:art: Added nice examples using the new class.
gmagannaDevelop Feb 1, 2020
2c9b781
:bulb: Corrected a typo.
gmagannaDevelop Feb 1, 2020
c890f89
Preparing checks of reproducibility.
gmagannaDevelop Feb 3, 2020
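
A note on commit 7c834db: in Colab, a pip-installed package persists across notebook runs on the same runtime, so the branch under test has to be uninstalled and reinstalled before new commits take effect. A minimal sketch of that cycle, assuming the package installs under the name segnet and reusing the install line from final_tests.py below:

# Colab cell (illustrative): force a clean reinstall of the branch under test.
!pip uninstall -y segnet
!pip install --no-cache-dir git+https://github.com/gmagannaDevelop/segnet.git@mru_tests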
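
Commit 20504db refers to decorating _Segmed__compile; that name is Python's mangled form of a double-underscore method __compile defined inside the Segmed class. A minimal illustration of that detail, with a hypothetical stand-in decorator (not segnet's actual one):

import functools

def logged(f):
    # Hypothetical decorator standing in for segnet's timing/logging decorators.
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        print("calling", f.__name__)
        return f(*args, **kwargs)
    return wrapper

class Segmed:
    @logged
    def __compile(self):
        # Stored on the class as _Segmed__compile because of name mangling.
        pass

    def train(self):
        self.__compile()  # the compiler rewrites this to self._Segmed__compile()

Segmed().train()  # logs the wrapped call, then runs __compile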
1,286 changes: 1,286 additions & 0 deletions TruthTest_MultiResUNet.ipynb
1,862 changes: 1,862 additions & 0 deletions example_notebooks/Comparative_MultiResUNet.ipynb
1,880 changes: 1,880 additions & 0 deletions example_notebooks/Copie_de_Model_abstraction.ipynb
629 changes: 629 additions & 0 deletions example_notebooks/Final_tests.ipynb
1,893 changes: 1,893 additions & 0 deletions example_notebooks/Model_abstraction.ipynb
1,503 changes: 1,503 additions & 0 deletions example_notebooks/MultiResUNet.ipynb
1,364 changes: 1,364 additions & 0 deletions example_notebooks/TruthTest_MultiResUNet.ipynb

(Large notebook diffs are not rendered by default.)

67 changes: 67 additions & 0 deletions example_notebooks/concise.py
@@ -0,0 +1,67 @@

import tensorflow as tf

import segnet.metrics as mts

from segnet.models import unet
from segnet.models import multiresunet as mru
from segnet.models import multiresunet2 as mru2
from segnet.models import multiresunet3 as mru3
from segnet.utils.Segmed import Segmed

def main():

    dataset_paths = {
        "isbi": "/Users/gml/Google Drive/DCI-Net/Colab_data/ISBI_neural/structured",
        "colonoscopy": "/Users/gml/Google Drive/DCI-Net/Colab_data/colonoscopy",  # Full original
        "dermoscopy80": "/Users/gml/Google Drive/DCI-Net/Colab_data/dermoscopy80",  # reduced to 80 images
        "dermoscopy150": "/Users/gml/Google Drive/DCI-Net/Colab_data/dermoscopy150",  # reduced to 150
        "chinese1": "/Users/gml/Google Drive/DCI-Net/Colab_data/Dataset 2"  # Chinese dataset
    }

    optimizers = {
        "chinese": tf.keras.optimizers.SGD(learning_rate=0.06, momentum=0.2, nesterov=False),
        "Original Adam": tf.keras.optimizers.Adam(beta_1=0.9, beta_2=0.999, epsilon=10e-8)
    }

    my_compiling_kw = {
        'optimizer': optimizers["Original Adam"],
        'loss': 'binary_crossentropy',
        'metrics': [
            mts.jaccard_index, mts.dice_coef,
            mts.O_Rate, mts.U_Rate, mts.Err_rate
        ]
    }

    my_hyper_params = {
        'batch_size': 25,
        'epochs': 150,
        'steps_per_epoch': 6
    }

    models = {
        "Unet": unet(),
        "MultiResUNet Edwin": mru.MultiResUnet(),
        "MultiResUNet Gustavo": mru2.MultiResUNet(),
        "MultiResUNet Original": mru3.MultiResUnet()
    }

    model = "Unet"  # see models.keys()

    x = Segmed(
        model=models[model],
        name=model,
        base_dir="/Users/gml/Google Drive/DCI-Net/SegMedLogs/LocalTests",
        data_path=dataset_paths["dermoscopy80"],
        author="Gustavo Magaña"
    )

    x.train(
        compiling_kw=my_compiling_kw,
        hyper_params=my_hyper_params
    )


if __name__ == "__main__":
    main()

96 changes: 96 additions & 0 deletions example_notebooks/final_tests.py
@@ -0,0 +1,96 @@
# -*- coding: utf-8 -*-
"""Final_tests.ipynb

Automatically generated by Colaboratory.

Original file is located at
https://colab.research.google.com/drive/1Tm1i32ADm2IDNZqFduqOhveSY93MZjKJ
"""

!ls drive

!apt install jq
!pip install git+https://github.com/gmagannaDevelop/segnet.git@mru_tests

import os

###############################################################
import tensorflow as tf

import segnet.metrics as mts

from segnet.models import unet
from segnet.models import multiresunet as mru
from segnet.models import multiresunet2 as mru2
from segnet.models import multiresunet3 as mru3
from segnet.utils.Segmed import Segmed

### Data-related
from google.colab import drive, files
drive.mount('/content/drive/')
###############################################################
### Not an import, but it must be defined here:
root_dir = "drive/My Drive/Gus_Servicio_Profesional"
log_dir = os.path.join(root_dir, "Logs")

dataset_paths = {
    "isbi": "drive/My Drive/Gus_Servicio_Profesional/Colab_data/ISBI_neural/structured",
    "colonoscopy": "drive/My Drive/Gus_Servicio_Profesional/Colab_data/colonoscopy",  # Full original
    "dermoscopy80": "drive/My Drive/Gus_Servicio_Profesional/Colab_data/dermoscopy80",  # reduced to 80 images
    "dermoscopy150": "drive/My Drive/Gus_Servicio_Profesional/Colab_data/dermoscopy150",  # reduced to 150
    "chinese1": "drive/My Drive/Gus_Servicio_Profesional/Colab_data/Dataset 2"  # Chinese dataset
}

optimizers = {
    "chinese": tf.keras.optimizers.SGD(learning_rate=0.06, momentum=0.2, nesterov=False),
    "Original Adam": tf.keras.optimizers.Adam(beta_1=0.9, beta_2=0.999, epsilon=10e-8)
}

my_compiling_kw = {
    'optimizer': optimizers["Original Adam"],
    'loss': 'binary_crossentropy',
    'metrics': [
        mts.jaccard_index, mts.dice_coef,
        mts.O_Rate, mts.U_Rate, mts.Err_rate
    ]
}

dataset = dataset_paths["dermoscopy80"]

len(os.listdir(os.path.join(dataset, "msks/masks")))  # notebook cell: count the mask files

my_hyper_params = {
    'batch_size': 20,
    'epochs': 15,
    'steps_per_epoch': 4
}

architectures = {
    "Unet": unet(),
    "MultiResUNet Edwin": mru.MultiResUnet(),
    "MultiResUNet Gustavo": mru2.MultiResUNet(),
    "MultiResUNet Original": mru3.MultiResUnet()
}

models = {
    key: Segmed(
        model=architectures[key],
        name=key,
        base_dir=log_dir,
        data_path=dataset_paths["dermoscopy80"],
        author="Gustavo Magaña"
    )
    for key in architectures.keys()
}

models  # notebook cell: display the Segmed instances

for model in models.values():
    model.comment(" Retrying using GPU accelerator, few epochs and the smallest dataset that I have. ")

for model in models.values():
    model.train(
        compiling_kw=my_compiling_kw,
        hyper_params=my_hyper_params
    )

1 change: 1 addition & 0 deletions requirements.txt
@@ -52,3 +52,4 @@ wcwidth
werkzeug
wrapt
zipp
gputil
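
requirements.txt gains gputil, the dependency behind the GPU logging added to the Segmed class in commit d102cb1. A minimal sketch of the kind of snapshot GPUtil can produce (illustrative only; not necessarily the format Segmed writes):

import GPUtil

# Print load and memory figures for every visible GPU.
for gpu in GPUtil.getGPUs():
    print(f"{gpu.name}: load {gpu.load:.0%}, memory {gpu.memoryUsed:.0f}/{gpu.memoryTotal:.0f} MB")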
47 changes: 47 additions & 0 deletions segnet/metrics/metrics.py
@@ -1,5 +1,52 @@
import tensorflow as tf

def O_Rate(y_true, y_pred):
    """
    Abdiel's metric, undocumented in the original; judging by the name,
    an over-segmentation rate, averaged over the batch.
    """
    num_data = tf.cast(tf.shape(y_true), dtype=tf.float32)  # num_data[0] is the batch size
    y_t = tf.reshape(y_true, shape=[-1])
    y_p = tf.reshape(y_pred, shape=[-1])
    uno = tf.constant(1.0, dtype=tf.float32)
    # Binarise both masks; round(x - 0.1) shifts the threshold from 0.5 up to about 0.6.
    y_true_b = tf.round(y_t - 0.1)
    y_pred_b = tf.round(y_p - 0.1)
    Dp = tf.reduce_sum(y_true_b * y_pred_b)          # pixels present in both masks
    Qp = tf.reduce_sum(y_true_b * (uno - y_pred_b))  # pixels only in the ground truth
    Up = tf.reduce_sum(y_pred_b * (uno - y_true_b))  # pixels only in the prediction

    return (Qp / (Up + Dp + 0.0001)) / num_data[0]


# These first ones use fewer parameters, but both are equivalent in truth table and in results.
def U_Rate(y_true, y_pred):
    """
    Counterpart of O_Rate (presumably an under-segmentation rate):
    same terms, with the prediction-only pixels in the numerator.
    """
    num_data = tf.cast(tf.shape(y_true), dtype=tf.float32)
    y_t = tf.reshape(y_true, shape=[-1])
    y_p = tf.reshape(y_pred, shape=[-1])
    uno = tf.constant(1.0, dtype=tf.float32)
    y_true_b = tf.round(y_t - 0.1)
    y_pred_b = tf.round(y_p - 0.1)
    Dp = tf.reduce_sum(y_true_b * y_pred_b)
    Qp = tf.reduce_sum(y_true_b * (uno - y_pred_b))  # unused here, kept for symmetry
    Up = tf.reduce_sum(y_pred_b * (uno - y_true_b))

    return (Up / (Up + Dp + 0.0001)) / num_data[0]


def Err_rate(y_true, y_pred):
    """
    Missed plus spurious pixels, relative to the overlap, averaged over the batch.
    """
    num_data = tf.cast(tf.shape(y_true), dtype=tf.float32)
    y_t = tf.reshape(y_true, shape=[-1])
    y_p = tf.reshape(y_pred, shape=[-1])
    uno = tf.constant(1.0, dtype=tf.float32)
    y_true_b = tf.round(y_t - 0.1)
    y_pred_b = tf.round(y_p - 0.1)
    Dp = tf.reduce_sum(y_true_b * y_pred_b)
    Qp = tf.reduce_sum(y_true_b * (uno - y_pred_b))
    Up = tf.reduce_sum(y_pred_b * (uno - y_true_b))

    return ((Qp + Up) / (Dp + 0.0001)) / num_data[0]


def jaccard_index(y_true, y_pred):
"""
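
As a quick sanity check of the three new metrics (assuming TensorFlow 2.x eager execution, segnet installed as above, and that segnet.metrics re-exports these functions, as concise.py's usage suggests): on a single 4-pixel mask with one correctly segmented pixel, one missed pixel and one false positive, Dp = Qp = Up = 1, so O_Rate and U_Rate should both come out near 0.5 and Err_rate near 2.0:

import tensorflow as tf
import segnet.metrics as mts

# One hit, one miss, one false positive, one true negative (batch of one mask).
y_true = tf.constant([[1.0, 1.0, 0.0, 0.0]])
y_pred = tf.constant([[1.0, 0.0, 1.0, 0.0]])

print(float(mts.O_Rate(y_true, y_pred)))    # ~0.5, Qp / (Up + Dp)
print(float(mts.U_Rate(y_true, y_pred)))    # ~0.5, Up / (Up + Dp)
print(float(mts.Err_rate(y_true, y_pred)))  # ~2.0, (Qp + Up) / Dp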