Commit
Improve jupyter notebook documentation, see Issue #66.
tibuch committed Jun 24, 2020
1 parent 5088a94 commit 91cde89
Showing 5 changed files with 11 additions and 5 deletions.
4 changes: 3 additions & 1 deletion examples/2D/denoising2D_BSD68/BSD68_reproducibility.ipynb
@@ -279,7 +279,9 @@
],
"source": [
"# We are ready to start training now.\n",
"history = model.train(X, X_val, 10, 20)"
"history = model.train(X, X_val, 10, 20)\n",
"# Run the line below for the full long training! This will take a couple hours. \n",
"# history = model.train(X, X_val)"
]
},
{
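For readers following along in the notebook, a brief sketch of the two training calls referenced above; it assumes `model`, `X`, and `X_val` are the N2V model and the training/validation data prepared in the earlier cells, and that the two extra positional arguments shorten training by overriding the epoch and step settings stored in the config:

```python
# Quick run used in the example notebook: the extra positional arguments
# (assumed to be epochs and steps per epoch) override the config settings.
history = model.train(X, X_val, 10, 20)

# Full training, using the settings stored in the model's config.
# This takes a couple of hours.
# history = model.train(X, X_val)
```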
3 changes: 2 additions & 1 deletion examples/2D/denoising2D_RGB/01_training.ipynb
@@ -238,6 +238,7 @@
"\n",
"For Noise2Void training it is possible to pass arbitrarily large patches to the training method. From these patches random subpatches of size <code>n2v_patch_shape</code> are extracted during training. Default patch shape is set to (64, 64). \n",
"\n",
"### Multi-Channel Data\n",
"In the past we experienced bleedthrough artifacts between channels if training was terminated to early. To counter bleedthrough we added the `single_net_per_channel` option, which is turned on by default. In the back a single U-Net for each channel is created and trained independently, thereby removing the possiblity of bleedthrough. <br/>\n",
"__Note:__ Essentially the network gets multiplied by the number of channels, which increases the memory requirements. If your GPU gets too small, you can always split the channels manually and train a network for each channel one after another.<br/>\n",
"But for RGB images we can turn this option off. "
@@ -247,7 +248,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"<font color='red'>Warning:</font> to make this example notebook execute faster, we have set <code>train_epochs</code> to only 25. <br>For better results we suggest 100 to 200 <code>train_epochs</code>."
"<font color='red' size=\"4\">__Warning:__ to make this example notebook execute faster, we have set <code>train_epochs</code> to only 25. <br>For better results we suggest 100 to 200 <code>train_epochs</code>.</font>"
]
},
{
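The options mentioned in the markdown cell above are set when the training configuration is created. A minimal sketch, assuming the standard `N2VConfig` and `N2V` classes from `n2v.models` and an RGB training array `X`; the model name and the epoch/step values are illustrative only:

```python
from n2v.models import N2VConfig, N2V

# X: RGB training patches of shape (N, H, W, 3) from the data generator;
# X_val: validation patches prepared the same way.
config = N2VConfig(
    X,
    n2v_patch_shape=(64, 64),      # random sub-patches of this size are drawn during training
    train_epochs=100,              # 100 to 200 epochs are suggested for good results
    train_steps_per_epoch=100,     # illustrative value
    single_net_per_channel=False,  # off for RGB, so one network sees all three channels together
)

model = N2V(config, 'n2v_2D_RGB', basedir='models')
history = model.train(X, X_val)
```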
3 changes: 2 additions & 1 deletion examples/2D/denoising2D_SEM/01_training.ipynb
@@ -247,6 +247,7 @@
"\n",
"For Noise2Void training it is possible to pass arbitrarily large patches to the training method. From these patches random subpatches of size <code>n2v_patch_shape</code> are extracted during training. Default patch shape is set to (64, 64). \n",
"\n",
"### Multi-Channel Data\n",
"In the past we experienced bleedthrough artifacts between channels if training was terminated to early. To counter bleedthrough we added the `single_net_per_channel` option, which is turned on by default. In the back a single U-Net for each channel is created and trained independently, thereby removing the possiblity of bleedthrough. <br/>\n",
"__Note:__ Essentially the network gets multiplied by the number of channels, which increases the memory requirements. If your GPU gets too small, you can always split the channels manually and train a network for each channel one after another."
]
@@ -255,7 +256,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"<font color='red'>Warning:</font> to make this example notebook execute faster, we have set <code>train_epochs</code> to only 10. <br>For better results we suggest 100 to 200 <code>train_epochs</code>."
"<font color='red' size=\"4\">__Warning:__ to make this example notebook execute faster, we have set <code>train_epochs</code> to only 10. <br>For better results we suggest 100 to 200 <code>train_epochs</code>.</font>"
]
},
{
3 changes: 2 additions & 1 deletion examples/2D/structN2V_2D_convallaria/01_training.ipynb
@@ -418,6 +418,7 @@
"\n",
"For Noise2Void training it is possible to pass arbitrarily large patches to the training method. From these patches random subpatches of size <code>n2v_patch_shape</code> are extracted during training. Default patch shape is set to (64, 64). \n",
"\n",
"### Multi-Channel Data\n",
"In the past we experienced bleedthrough artifacts between channels if training was terminated to early. To counter bleedthrough we added the `single_net_per_channel` option, which is turned on by default. In the back a single U-Net for each channel is created and trained independently, thereby removing the possiblity of bleedthrough. <br/>\n",
"__Note:__ Essentially the network gets multiplied by the number of channels, which increases the memory requirements. If your GPU gets too small, you can always split the channels manually and train a network for each channel one after another."
]
@@ -426,7 +427,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"<font color='red'>Warning:</font> to make this example notebook execute faster, we have set <code>train_epochs</code> to only 10. <br>For better results we suggest 100 to 200 <code>train_epochs</code>."
"<font color='red' size=\"4\">__Warning:__ to make this example notebook execute faster, we have set <code>train_epochs</code> to only 10. <br>For better results we suggest 100 to 200 <code>train_epochs</code>.</font>"
]
},
{
3 changes: 2 additions & 1 deletion examples/3D/01_training.ipynb
@@ -228,6 +228,7 @@
"\n",
"For Noise2Void training it is possible to pass arbitrarily large patches to the training method. From these patches random subpatches of size <code>n2v_patch_shape</code> are extracted during training. Default patch shape is set to (64, 64), but since this is an 3D example we obviously need to specify a triple, here (32, 64, 64). \n",
"\n",
"### Multi-Channel Data\n",
"In the past we experienced bleedthrough artifacts between channels if training was terminated to early. To counter bleedthrough we added the `single_net_per_channel` option, which is turned on by default. In the back a single U-Net for each channel is created and trained independently, thereby removing the possiblity of bleedthrough. <br/>\n",
"__Note:__ Essentially the network gets multiplied by the number of channels, which increases the memory requirements. If your GPU gets too small, you can always split the channels manually and train a network for each channel one after another."
]
@@ -236,7 +237,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"<font color='red'>Warning:</font> to make this example notebook execute faster, we have set <code>train_epochs</code> to only 20. <br>For better results we suggest 100 to 200 <code>train_epochs</code>."
"<font color='red' size=\"4\">__Warning:__ to make this example notebook execute faster, we have set <code>train_epochs</code> to only 20. <br>For better results we suggest 100 to 200 <code>train_epochs</code>. Especially with 3D data longer training is necessary to obtain good results.</font>"
]
},
{
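As the 3D notebook points out, `n2v_patch_shape` must be a triple for volumetric data. A corresponding configuration sketch, again with illustrative names and values and assuming a 3D training array `X` from the data generator:

```python
from n2v.models import N2VConfig, N2V

# X: 3D training patches of shape (N, Z, Y, X, C) from the data generator.
config = N2VConfig(
    X,
    n2v_patch_shape=(32, 64, 64),  # a (z, y, x) triple is required for 3D data
    train_epochs=100,              # 3D data in particular benefits from longer training
    train_steps_per_epoch=100,     # illustrative value
)

model = N2V(config, 'n2v_3D', basedir='models')
```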
