Hi, I'm reading the paper and am curious about your implementation.
The CReLU layer seems to be defined but never used; instead, the code re-implements it inline during layer construction.
Also, the paper includes batch norm layers, but they aren't implemented here.
What was the consideration behind this implementation? Better performance?
Thanks
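For context, CReLU (Concatenated ReLU, Shang et al. 2016) concatenates the activation with its negation along the channel axis before applying ReLU, so a dedicated module and an inline re-implementation compute the same thing. A minimal PyTorch sketch of the standard definition (my own illustration, not this repo's code):

```python
import torch
import torch.nn as nn

class CReLU(nn.Module):
    """Concatenated ReLU: concatenate x and -x along the channel
    dimension, then apply ReLU. Doubles the channel count."""
    def forward(self, x):
        return torch.relu(torch.cat([x, -x], dim=1))

# e.g. an input of shape (1, 3, 8, 8) yields output shape (1, 6, 8, 8)
```

Note that the output has twice as many channels as the input, which is what lets FaceBoxes halve the number of convolution filters in the early layers.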
Either way is fine for CReLU. With the BN layers, FaceBoxes should get better results; I simply forgot to add the batch norm layers. Thanks
@yulizhou @lxg2015 Hi, I fixed the network design in this repo. Small changes improved performance, such as switching plain conv layers to conv_bn_relu.
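For readers unfamiliar with the naming, conv_bn_relu presumably refers to the usual Conv → BatchNorm → ReLU block; a minimal sketch under that assumption (the helper name and signature here are illustrative, not the repo's actual API):

```python
import torch.nn as nn

def conv_bn_relu(in_ch, out_ch, kernel_size, stride=1, padding=0):
    """Conv -> BatchNorm -> ReLU; the BN layer is what the
    plain conv layers discussed above were missing."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )
```

Setting bias=False on the convolution is conventional here, since the following BatchNorm layer has its own learnable shift.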
@xiongzihua Hi, I think the prediction code is wrong: we can't resize the original image. See this repo; its output is closer to the paper's.