Hi,
I am currently running N2V in a Jupyter notebook.
In the past, I would run the training module and then restart the kernel to free GPU memory so that I could run the prediction module.
Now I am wondering whether I can run both modules back to back without restarting the kernel.
Is there something in the code that shuts TF down to free GPU memory, something like `tf.keras.backend.clear_session()`?
I also tried running the training module in a separate process, hoping that the session would end and the memory would be freed when it finished, but I had to close Jupyter Notebook entirely to get the GPU memory back (a sketch of what I tried is below).
Any advice would be greatly appreciated!
Thank you.
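For context, this is roughly the pattern I tried (a minimal sketch with a toy `tf.keras` model standing in for N2V; the file names are illustrative). One caveat: with the `spawn` start method, the worker function generally has to live in an importable `.py` file, since functions defined in a notebook cell cannot be pickled into the child process.

```python
import multiprocessing as mp
import numpy as np

def train_worker():
    # Import TF inside the worker so the CUDA context is created in
    # the child process and torn down when the child exits.
    import tensorflow as tf
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer="adam", loss="mse")
    x = np.random.rand(64, 4).astype(np.float32)
    y = np.random.rand(64, 1).astype(np.float32)
    model.fit(x, y, epochs=1, verbose=0)
    model.save("trained_model.h5")  # hand results back via disk

if __name__ == "__main__":
    # "spawn" gives the child a fresh interpreter; forking a process
    # that has already initialized CUDA is unreliable.
    ctx = mp.get_context("spawn")
    p = ctx.Process(target=train_worker)
    p.start()
    p.join()
    # When the child exits, its GPU memory goes back to the driver.
```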
In my experience, running training and prediction sequentially in the same script/notebook works.
The problem with running the training notebook first and then the prediction notebook is that two separate kernels are started, each holding its own GPU allocation. If training and prediction run in the same kernel, the GPU memory is allocated by that one kernel and can simply be reused.
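For reference, a minimal sketch of the same-kernel pattern (toy random data; the shapes, model name, and config values are placeholders, not recommended settings):

```python
import numpy as np
import tensorflow as tf
from n2v.models import N2V, N2VConfig

# Optional: let TF grow its GPU allocation on demand instead of
# reserving all memory up front.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)

# Toy patches standing in for real training data (S, Y, X, C).
X = np.random.rand(128, 64, 64, 1).astype(np.float32)
X_val = np.random.rand(16, 64, 64, 1).astype(np.float32)

# --- Training ---
config = N2VConfig(X, train_epochs=1, train_steps_per_epoch=2,
                   train_batch_size=16)
model = N2V(config=config, name="n2v_demo", basedir="models")
model.train(X, X_val)

# Drop the training graph. Note this does NOT return GPU memory to
# the OS: TF keeps its allocation for the life of the process and
# reuses it, which is exactly why staying in one kernel works.
tf.keras.backend.clear_session()

# --- Prediction in the same kernel ---
# config=None reloads the trained model from basedir/name.
model = N2V(config=None, name="n2v_demo", basedir="models")
pred = model.predict(X_val[0, ..., 0], axes="YX")
```

The `clear_session()` call is optional here; the main point is that everything stays in one process, so the memory TF reserved during training is reused for prediction.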