Cannot replicate result of Haar transform #29
Comments
Same problem here, have you solved it? I have tried the PyWavelets implementation and the results are very similar.
Unfortunately no, and the project that was using this has been handed over to other people now. What I did was to ignore this issue entirely and use this result to run the rest of the code. The overall result is still satisfactory (at least it's not something like 50% accuracy), but there are some discrepancies between what I did and what the paper did (namely, I trained on synthesized Moire images instead of real Moire images). FYI.

I suspect the reason might stem from a resolution / zoom rate / size issue. Moire patterns are very sensitive to those, and sometimes resizing an image or simply zooming in/out in a photo editor can produce a very obvious, visible Moire pattern at one specific zoom rate. Maybe the pattern is still in the image and the model can still extract this property; it just happens that we can't see it with the naked eye. Or maybe the paper simply resized / zoomed the matplotlib screenshot and the Moire pattern became very obvious. I didn't try it and I won't, but maybe you can test it out.

You can also experiment with resizing your input image to a different (but fixed) size, or with a different library's resizing function. I tried OpenCV's and PIL's resize functions (with the same resizing method) and sometimes they yield drastically different results. Namely, for my final model, the same images resized with different functions but the same method (e.g. bilinear) gave very different inference accuracy. I don't know why, but it happened. That is about all the advice I can give about Moire images.
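For anyone who wants to check the OpenCV vs. PIL discrepancy mentioned above, here is a minimal sketch that resizes the same image with both libraries using bilinear interpolation and compares the outputs. The file name and target size are assumptions, not values from the project.

```python
import numpy as np
import cv2
from PIL import Image

target = (512, 512)  # (width, height); an assumed size, not from the project

# OpenCV: loads BGR, resize takes (width, height)
img_cv = cv2.imread("moire_sample.png")
resized_cv = cv2.resize(img_cv, target, interpolation=cv2.INTER_LINEAR)

# PIL: loads RGB, resize also takes (width, height)
img_pil = Image.open("moire_sample.png").convert("RGB")
resized_pil = np.array(img_pil.resize(target, resample=Image.BILINEAR))

# Compare on the same channel order (convert OpenCV's BGR to RGB first)
diff = np.abs(resized_cv[..., ::-1].astype(np.int32) - resized_pil.astype(np.int32))
print("max per-pixel difference:", diff.max())
print("mean per-pixel difference:", diff.mean())
```

Even with the same nominal method (bilinear), the two libraries use different implementations, so a nonzero difference here is expected and can be enough to shift Moire-sensitive features.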
(BTW, I used PyWavelets in the end. I don't remember whether it is faster, but it certainly has a nicer function interface.)
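For reference, a minimal sketch of a single-level 2D Haar DWT with PyWavelets, along the lines of what the comments above describe. The file name and grayscale conversion are assumptions, and the mapping of PyWavelets' cH/cV/cD detail bands onto the LH/HL/HH names used in the notebook may differ from haar2D.py's convention.

```python
import numpy as np
import pywt
from PIL import Image

# Load an image as a grayscale float array (assumed input)
img = np.array(Image.open("moire_sample.png").convert("L"), dtype=np.float32)

# dwt2 returns the approximation band and a tuple of the three detail bands
cA, (cH, cV, cD) = pywt.dwt2(img, "haar")

for name, band in zip(("cA", "cH", "cV", "cD"), (cA, cH, cV, cD)):
    print(name, band.shape, float(band.min()), float(band.max()))
```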
In the playground.ipynb notebook, I cannot reproduce the first block of code as well as the original notebook image shows.
This is my output. As you can see, there is not much Moire pattern here, especially in the HL subband.
I don't see any changes to this file or to haar2D.py in the GitHub history, and the transform is implemented with numpy, so I don't think it's a version issue. Is there any explanation for how this is happening?
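For cross-checking the notebook's subbands against an independent reference, here is a minimal numpy sketch of one level of the separable 2D Haar transform. This is not the repository's haar2D.py, and the LL/LH/HL/HH naming convention here may differ from the one used in playground.ipynb.

```python
import numpy as np

def haar2d_level(x):
    """One level of the 2D Haar transform; x is a 2D float array with even dimensions."""
    # First pass: average and difference of adjacent column pairs (low/high along rows)
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    # Second pass: repeat on adjacent row pairs to get the four subbands
    LL = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    LH = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    HL = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    HH = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return LL, LH, HL, HH
```

If this reference and PyWavelets both produce the same subbands as haar2D.py, the discrepancy is more likely in the input image handling (loading, resizing, color conversion) than in the transform itself.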