example_video_and_image_colorization #25
Hey @Jonathhhan, great work. I have adjusted a few things and wanted to push the changes soon. However, I noticed that videos didn't look that good, so I dug into the Python code for inference. I saw that the authors divide by 256, then convert from RGB to LAB, and finally take the first channel as the input to the model. You are doing a division by 2.55 on the first channel. Could you please elaborate on that? Thanks :)
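For reference, the pipeline described above could be sketched in plain NumPy as follows; `rgb_to_l` is a hypothetical helper for illustration, not code from either repository (it normalizes with /255, the exact scale, where the inference script reportedly uses /256):

```python
import numpy as np

def rgb_to_l(rgb):
    """sRGB uint8 image (H, W, 3) -> CIELAB L* channel in 0..100 (D65 white)."""
    srgb = rgb.astype(np.float64) / 255.0
    # linearize the sRGB transfer function
    lin = np.where(srgb <= 0.04045, srgb / 12.92, ((srgb + 0.055) / 1.055) ** 2.4)
    # relative luminance Y from linear RGB
    y = lin[..., 0] * 0.2126 + lin[..., 1] * 0.7152 + lin[..., 2] * 0.0722
    # CIE f(t), then L*
    f = np.where(y > (6 / 29) ** 3, np.cbrt(y), y / (3 * (6 / 29) ** 2) + 4 / 29)
    return 116.0 * f - 16.0
```

A division by 2.55 maps 0..255 linearly onto the 0..100 range of L*, but L* is nonlinear in the pixel value (for mid-gray 128 the true L* is about 53.6, while 128 / 2.55 ≈ 50.2), which may be part of why the two preprocessings give different results.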
@bytosaur thanks for the hint. I will have a look into that.
@bytosaur I made a version that converts to LAB color space (and some other improvements): https://github.com/Jonathhhan/ofxTensorFlow2/blob/example_video_and_image_colorization/example_video_and_image_colorization_2/src/ofApp.cpp
this example did not make it into #29 since the results weren't that useful. Anyway, thanks a lot for the contribution.
@bytosaur no problem, I already knew that. Just out of interest: is the colorization itself a boring use case, or is the result not good enough? (It was trained with Hitchcock movies, if I remember right, and coloring sky and water works really badly, while coloring skin and trees, for example, works much better.) Thanks for improving and including the other 2 examples (I would love to see more examples from other users, for a better understanding of how to implement different networks with ofxTensorFlow2).
hey @Jonathhhan, no, I actually think the use case is OK, just the quality for video was poor. But yeah, in my eyes openFrameworks is useful for realtime applications, where image colorization may not be very interesting. However, Yolov4 is quite cool and we are already using it for fun projects :) Still, I want to keep the thread open at least for a while.
Apropos the Yolo example, we will put a related project on GitHub soon: YoloOSC
It was not easy for me to find and understand a pretrained colorization model.
With this pretrained model I got it working (I had to convert it to a SavedModel first):
https://github.com/EnigmAI/Artistia/blob/main/ImageColorization/trained_models_v2/U-Net-epoch-100-loss-0.006095.hdf5
I tried this model before, but it seems they use two models together (I do not really understand it yet): https://github.com/pvitoria/ChromaGAN
I converted the model like this (with Python):
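The snippet itself does not appear in the thread; a minimal sketch of the usual Keras HDF5 → SavedModel conversion, using a tiny stand-in model and hypothetical file names (assuming TensorFlow 2.x), might look like this:

```python
import tensorflow as tf

# Tiny stand-in for the linked U-Net checkpoint; the conversion steps are
# the same for the real .hdf5 file: load it with Keras, then write it back
# out as a SavedModel directory, which is the format ofxTensorFlow2 loads.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4),
])
model.save("stand_in.h5")  # Keras HDF5 checkpoint, like the linked file
reloaded = tf.keras.models.load_model("stand_in.h5")

# Newer Keras versions export a SavedModel via export(); older TF 2.x
# releases write one when save() is given a path with no .h5 suffix.
if hasattr(reloaded, "export"):
    reloaded.export("saved_model_dir")
else:
    reloaded.save("saved_model_dir")
```

For the real checkpoint, you would load the linked U-Net-epoch-100-loss-0.006095.hdf5 file with `tf.keras.models.load_model` instead of building the stand-in model.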
Anyway, here is the example: https://github.com/Jonathhhan/ofxTensorFlow2/tree/example_video_and_image_colorization/example_video_and_image_colorization