From 91cde89374d6a9854b9a03213208c420ea3a38fb Mon Sep 17 00:00:00 2001 From: tibuch Date: Wed, 24 Jun 2020 09:11:59 +0200 Subject: [PATCH] Improve jupyter notebook documentation, see Issue #66. --- examples/2D/denoising2D_BSD68/BSD68_reproducibility.ipynb | 4 +++- examples/2D/denoising2D_RGB/01_training.ipynb | 3 ++- examples/2D/denoising2D_SEM/01_training.ipynb | 3 ++- examples/2D/structN2V_2D_convallaria/01_training.ipynb | 3 ++- examples/3D/01_training.ipynb | 3 ++- 5 files changed, 11 insertions(+), 5 deletions(-) diff --git a/examples/2D/denoising2D_BSD68/BSD68_reproducibility.ipynb b/examples/2D/denoising2D_BSD68/BSD68_reproducibility.ipynb index 8528bae..3e97d02 100644 --- a/examples/2D/denoising2D_BSD68/BSD68_reproducibility.ipynb +++ b/examples/2D/denoising2D_BSD68/BSD68_reproducibility.ipynb @@ -279,7 +279,9 @@ ], "source": [ "# We are ready to start training now.\n", - "history = model.train(X, X_val, 10, 20)" + "history = model.train(X, X_val, 10, 20)\n", + "# Run the line below for the full-length training! This will take a couple of hours.\n", + "# history = model.train(X, X_val)" ] }, { diff --git a/examples/2D/denoising2D_RGB/01_training.ipynb b/examples/2D/denoising2D_RGB/01_training.ipynb index 8fac136..bee9c95 100644 --- a/examples/2D/denoising2D_RGB/01_training.ipynb +++ b/examples/2D/denoising2D_RGB/01_training.ipynb @@ -238,6 +238,7 @@ "\n", "For Noise2Void training it is possible to pass arbitrarily large patches to the training method. From these patches random subpatches of size n2v_patch_shape are extracted during training. Default patch shape is set to (64, 64). \n", "\n", + "### Multi-Channel Data\n", "In the past we experienced bleedthrough artifacts between channels if training was terminated too early. To counter bleedthrough we added the `single_net_per_channel` option, which is turned on by default. Under the hood a separate U-Net is created and trained independently for each channel, thereby removing the possibility of bleedthrough.
\n", "__Note:__ Essentially the network gets multiplied by the number of channels, which increases the memory requirements. If your GPU runs out of memory, you can always split the channels manually and train a network for each channel one after another.
\n", "But for RGB images we can turn this option off. " @@ -247,7 +248,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Warning: to make this example notebook execute faster, we have set train_epochs to only 25.
For better results we suggest 100 to 200 train_epochs." + "__Warning:__ to make this example notebook execute faster, we have set train_epochs to only 25.
For better results we suggest 100 to 200 train_epochs.
" ] }, { diff --git a/examples/2D/denoising2D_SEM/01_training.ipynb b/examples/2D/denoising2D_SEM/01_training.ipynb index c99cad3..06f0367 100644 --- a/examples/2D/denoising2D_SEM/01_training.ipynb +++ b/examples/2D/denoising2D_SEM/01_training.ipynb @@ -247,6 +247,7 @@ "\n", "For Noise2Void training it is possible to pass arbitrarily large patches to the training method. From these patches random subpatches of size n2v_patch_shape are extracted during training. Default patch shape is set to (64, 64). \n", "\n", + "### Multi-Channel Data\n", "In the past we experienced bleedthrough artifacts between channels if training was terminated too early. To counter bleedthrough we added the `single_net_per_channel` option, which is turned on by default. Under the hood a separate U-Net is created and trained independently for each channel, thereby removing the possibility of bleedthrough.
\n", "__Note:__ Essentially the network gets multiplied by the number of channels, which increases the memory requirements. If your GPU runs out of memory, you can always split the channels manually and train a network for each channel one after another." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "Warning: to make this example notebook execute faster, we have set train_epochs to only 10.
For better results we suggest 100 to 200 train_epochs." + "__Warning:__ to make this example notebook execute faster, we have set train_epochs to only 10.
For better results we suggest 100 to 200 train_epochs.
" ] }, { diff --git a/examples/2D/structN2V_2D_convallaria/01_training.ipynb b/examples/2D/structN2V_2D_convallaria/01_training.ipynb index 6527427..bb6733e 100644 --- a/examples/2D/structN2V_2D_convallaria/01_training.ipynb +++ b/examples/2D/structN2V_2D_convallaria/01_training.ipynb @@ -418,6 +418,7 @@ "\n", "For Noise2Void training it is possible to pass arbitrarily large patches to the training method. From these patches random subpatches of size n2v_patch_shape are extracted during training. Default patch shape is set to (64, 64). \n", "\n", + "### Multi-Channel Data\n", "In the past we experienced bleedthrough artifacts between channels if training was terminated too early. To counter bleedthrough we added the `single_net_per_channel` option, which is turned on by default. Under the hood a separate U-Net is created and trained independently for each channel, thereby removing the possibility of bleedthrough.
\n", "__Note:__ Essentially the network gets multiplied by the number of channels, which increases the memory requirements. If your GPU runs out of memory, you can always split the channels manually and train a network for each channel one after another." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "Warning: to make this example notebook execute faster, we have set train_epochs to only 10.
For better results we suggest 100 to 200 train_epochs." + "__Warning:__ to make this example notebook execute faster, we have set train_epochs to only 10.
For better results we suggest 100 to 200 train_epochs.
" ] }, { diff --git a/examples/3D/01_training.ipynb b/examples/3D/01_training.ipynb index 111ecd6..59c5ace 100644 --- a/examples/3D/01_training.ipynb +++ b/examples/3D/01_training.ipynb @@ -228,6 +228,7 @@ "\n", "For Noise2Void training it is possible to pass arbitrarily large patches to the training method. From these patches random subpatches of size n2v_patch_shape are extracted during training. Default patch shape is set to (64, 64), but since this is a 3D example we need to specify a triple, here (32, 64, 64). \n", "\n", + "### Multi-Channel Data\n", "In the past we experienced bleedthrough artifacts between channels if training was terminated too early. To counter bleedthrough we added the `single_net_per_channel` option, which is turned on by default. Under the hood a separate U-Net is created and trained independently for each channel, thereby removing the possibility of bleedthrough.
\n", "__Note:__ Essentially the network gets multiplied by the number of channels, which increases the memory requirements. If your GPU runs out of memory, you can always split the channels manually and train a network for each channel one after another." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "Warning: to make this example notebook execute faster, we have set train_epochs to only 20.
For better results we suggest 100 to 200 train_epochs." + "__Warning:__ to make this example notebook execute faster, we have set train_epochs to only 20.
For better results we suggest 100 to 200 train_epochs. Especially with 3D data, longer training is necessary to obtain good results.
" ] }, {