Hi, I am trying to decrease the generation time. So far I am getting 2 minutes and 20 seconds per image (generating 10 output images for age).
What I am realizing is that encode_images.py spends this long on each input image:
Initializing generator : 7.2106 secs
Creating PerceptualModel : 9.0305 secs
Loading Resnet model : 23.0473 secs
Loop loss : 1.0582 secs
Loop loss : 0.0619 secs
Loop loss : 0.0630 secs
Loop loss : 0.0618 secs
Loop loss : 0.0628 secs
Loop loss : 0.0621 secs
So I am trying to initialize the generator, create the perceptual model, and load the ResNet model once at the beginning of my script and pass them as parameters to encode_images.py, so that steps 1 to 3 are not repeated for every image.
But I have no idea if that is the right way to do it. Instead of calling the script directly, I defined an auxiliar() function that takes the same flags and parameters as the former script call.
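Roughly, the new function splits the work like this (a simplified structural sketch, not my exact code; the function bodies are stand-ins for the blocks that already exist in encode_images.py, and input_images is a hypothetical list of aligned input images):

```python
# Structural sketch only: build the expensive objects once, reuse them per image.

def build_models():
    # Done once at the start of the script: the three slow steps timed above.
    generator = ...         # "Initializing generator"   (~7 s)
    perceptual_model = ...  # "Creating PerceptualModel"  (~9 s)
    resnet = ...            # "Loading Resnet model"      (~23 s)
    return generator, perceptual_model, resnet

def auxiliar(image_path, generator, perceptual_model, resnet):
    # Done per image: only the reference-image setup and the optimization
    # loop, reusing the objects that were built once in build_models().
    ...

generator, perceptual_model, resnet = build_models()
for image_path in input_images:
    auxiliar(image_path, generator, perceptual_model, resnet)
```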
So far I am getting this error: ValueError: Tensor("Const_1:0", shape=(3,), dtype=float32) must be from the same graph as Tensor("strided_slice:0", shape=(1, 256, 256, 3), dtype=float32).
It happens at a point in the code that used to live inside encode_images.py.
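My guess is that the models I build up front end up in one tf.Graph (presumably the one set up by dnnlib.tflib.init_tf()), while the tensors created later for each image land in a different graph, so the two cannot be combined. A minimal, self-contained illustration of the error and of the kind of fix I am trying, in plain TF 1.x rather than the repo's actual code:

```python
import numpy as np
import tensorflow as tf  # TF 1.x, as used by the repo

# Reproducing the symptom: tensors from two different graphs cannot be mixed.
g1 = tf.Graph()
with g1.as_default():
    channel_mean = tf.constant([0.485, 0.456, 0.406])      # analogous to "Const_1:0"

g2 = tf.Graph()
with g2.as_default():
    images = tf.placeholder(tf.float32, [1, 256, 256, 3])  # analogous to "strided_slice:0"
    # images - channel_mean  # <- raises the same "must be from the same graph" ValueError

# The fix I am aiming for: build the models once, keep a handle on their
# graph/session, and create every per-image tensor inside that same graph.
graph = tf.Graph()
with graph.as_default():
    sess = tf.Session(graph=graph)
    channel_mean = tf.constant([0.485, 0.456, 0.406])
    images = tf.placeholder(tf.float32, [1, 256, 256, 3])
    normalized = images - channel_mean                      # same graph, no error
    print(sess.run(normalized,
                   {images: np.zeros((1, 256, 256, 3), np.float32)}).shape)
```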