
Unable to find where to put 2 images in order to generate the middle frame #1

Open
adrienpitz opened this issue Mar 2, 2018 · 2 comments

Comments

@adrienpitz

Hi,

I have a problem. I trained the model on the Stefan dataset and now the model has been saved.

I want to test the method on fast-moving tennis footage in order to produce a smooth slow-mo video. Given two images taken from MY input video (say img01.jpg and img03.jpg), where (or how) can I give those two images to the trained model to generate the interpolated frame img02.jpg?

I can't find the entry point of the program!

Thanks a lot,
Adrien

@apoorva-sharma
Owner

Hi Adrien,

The code was written to train and test on the same y4m video, and so testing it on frames from another video might be a bit of work. The images go into the model as an np array - take a look at the datasets.py file to see how it's structured. You could create an array of the same shape with your images, and then call the model using sess.run(), using a process similar to that in the test() method of the Finn class in model.py.
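To make the suggestion above concrete, here is a minimal sketch of how two frames might be packed into an array for the model. The channel-concatenation layout and the [-1, 1] scaling are assumptions based on common conventions, not confirmed from the repo — check datasets.py for the exact preprocessing, and the names `finn`, `sess`, and `finn.doublets` in the usage comment are likewise hypothetical:

```python
import numpy as np

def make_doublet(img1, img2):
    """Stack two HxWx3 uint8 frames into a single batch entry.

    Assumption: the two frames are concatenated along the channel
    axis (giving H x W x 6) and scaled to [-1, 1]. Verify against
    the actual array construction in datasets.py.
    """
    pair = np.concatenate([img1, img2], axis=-1)   # H x W x 6
    pair = pair.astype(np.float32) / 127.5 - 1.0   # scale to [-1, 1]
    return pair[np.newaxis, ...]                   # 1 x H x W x 6

# Hypothetical usage inside a tf.Session, mirroring Finn.test():
# mid = sess.run(finn.G, feed_dict={finn.doublets: make_doublet(a, b)})
```

The key point is that the array's shape must match what the `doublets` placeholder was built with, so the spatial dimensions of your frames need to match the training video.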

Hope that helps,
Apoorva

@adrienpitz
Author

Hi! Thanks for the reply.

I took a video that interests me :

  • 1920 x 1080
  • 25fps
  • y4m

I tried to train/test the network with this video, but it throws an error:

Traceback (most recent call last):
  File "main.py", line 50, in <module>
    finn.build_model()
  File "/mnt/ahl01/GAN/finn/model.py", line 110, in build_model
    self.G = self.generator(self.doublets)
  File "/mnt/ahl01/GAN/finn/model.py", line 88, in generator
    stack = tf.concat([result, rev_conv_outputs[i+1]], 3)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/array_ops.py", line 1099, in concat
    return gen_array_ops._concat_v2(values=values, axis=axis, name=name)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 706, in _concat_v2
    "ConcatV2", values=values, axis=axis, name=name)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2958, in create_op
    set_shapes_for_outputs(ret)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2209, in set_shapes_for_outputs
    shapes = shape_func(op)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2159, in call_with_requiring
    return call_cpp_shape_fn(op, require_shape_fn=True)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/common_shapes.py", line 627, in call_cpp_shape_fn
    require_shape_fn)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/common_shapes.py", line 691, in _call_cpp_shape_fn_impl
    raise ValueError(err.message)
ValueError: Dimension 1 in both shapes must be equal, but are 136 and 135 for 'generator/concat' (op: 'ConcatV2') with input shapes: [1,136,240,64], [1,135,240,64], [] and with computed input tensors: input[2] = <3>.

Is it therefore possible to train the model on videos with a larger resolution than 352x288?
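A plausible reading of the traceback (an inference, not confirmed from the repo): the encoder halves the spatial dimensions several times and the decoder doubles them back, so a height of 1080 reaches 135 on the way down and rounds back up to 136 on the way up, which breaks the skip-connection concat (136 vs 135). One workaround would be to crop frames so height and width are multiples of 2^n for n down/up-sampling stages — a sketch assuming n = 4, i.e. multiples of 16:

```python
import numpy as np

def crop_to_multiple(frame, m=16):
    """Center-crop an HxWxC frame so H and W are multiples of m."""
    h, w = frame.shape[:2]
    nh, nw = h - h % m, w - w % m
    top, left = (h - nh) // 2, (w - nw) // 2
    return frame[top:top + nh, left:left + nw]

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
print(crop_to_multiple(frame).shape)  # (1072, 1920, 3)
```

Note that 352x288 is already a multiple of 16 in both dimensions, which would explain why the original video trains without error.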

In the meantime, I will also test the solution you suggested for generating the missing frame between two given images.
