
effects of activation functions? #4

Open
pswpswpsw opened this issue Jun 26, 2018 · 1 comment

@pswpswpsw

It seems that no one really talks about the effect of activation functions.

Personally, I found that ReLU gives the most human-favorable uncertainty estimates, while tanh is overconfident on inputs it has not seen.

A similar observation was made by Gal in his paper on dropout as Bayesian approximation.

Any thoughts for bayesbyhypernet?

@pawni
Owner

pawni commented Jul 30, 2018

Hi,

Sorry for not replying earlier - somehow I didn't get a notification about this. I haven't looked into it yet, but I agree that it's an interesting question to ask :)

I will look into it as soon as I have some time - in the meantime, feel free to try it yourself with the code here and share your results.
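
For anyone who wants to reproduce the observation above, here is a minimal sketch of the kind of experiment being described. It does not use the BayesByHypernet code; it uses MC dropout (as in Gal's paper) on a toy 1D regression task, trains the same small network once with ReLU and once with tanh, and compares the predictive standard deviation far away from the training data. All names (`make_net`, `mc_predict`, the layer sizes, dropout rate, etc.) are illustrative choices, not part of this repository.

```python
# Sketch: compare predictive uncertainty of ReLU vs tanh under MC dropout.
# Assumes PyTorch; this is not the BayesByHypernet implementation.
import torch
import torch.nn as nn

def make_net(activation):
    # Small MLP with dropout kept active at test time for MC sampling.
    return nn.Sequential(
        nn.Linear(1, 64), activation(), nn.Dropout(p=0.1),
        nn.Linear(64, 64), activation(), nn.Dropout(p=0.1),
        nn.Linear(64, 1),
    )

def fit(net, x, y, steps=2000, lr=1e-3):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(x), y)
        loss.backward()
        opt.step()

def mc_predict(net, x, samples=100):
    # Keep dropout on (train mode) to draw approximate posterior samples.
    net.train()
    preds = torch.stack([net(x) for _ in range(samples)])
    return preds.mean(0), preds.std(0)

# Toy 1D regression: training data only on [-1, 1], queries far outside it.
torch.manual_seed(0)
x_train = torch.linspace(-1, 1, 100).unsqueeze(1)
y_train = torch.sin(3 * x_train) + 0.1 * torch.randn_like(x_train)
x_test = torch.linspace(-4, 4, 200).unsqueeze(1)

for name, act in [("relu", nn.ReLU), ("tanh", nn.Tanh)]:
    net = make_net(act)
    fit(net, x_train, y_train)
    mean, std = mc_predict(net, x_test)
    # The claim in this issue is that the tanh net stays overconfident
    # (small std) far from the data, while the ReLU net does not.
    far = (x_test.abs() > 2).squeeze()
    print(name, "mean predictive std outside the data:", std[far].mean().item())
```

If the observation holds in this setting, the printed std for the tanh network should be noticeably smaller than for the ReLU network, i.e. tanh reports more confidence on inputs it never saw. Plotting mean ± 2·std over `x_test` makes the difference easier to inspect visually.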
