
The onehot label on line 31 doesn't need to be multiplied in again; in principle the result only needs to be multiplied once #1

Open
hanghang2333 opened this issue Jan 5, 2018 · 3 comments

Comments

@hanghang2333

No description provided.

@hushunda

Indeed, it's unnecessary. The code is written rather verbosely, and there's no softmax either.
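The point in the issue title can be checked directly: for a one-hot label vector (entries are only 0 or 1), multiplying by it a second time changes nothing, since the zero entries are already zero and the one entry is multiplied by 1. A minimal NumPy sketch with hypothetical values:

```python
import numpy as np

# Hypothetical one-hot label and softmax output for a 3-class example
onehot = np.array([0.0, 1.0, 0.0])
probs = np.array([0.2, 0.7, 0.1])

once = onehot * probs   # masks out everything but the true-class probability
twice = onehot * once   # multiplying by the one-hot mask again is a no-op

assert np.allclose(once, twice)
```

So dropping the second multiplication leaves the loss value unchanged, which is what the issue title argues.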

@alyato

alyato commented Jun 6, 2018

Hi, I'm using Keras with TensorFlow as the backend.
When using this code, is the line I need the one below

func_fl = focal_loss(labels, model_out, fl_gamma, fl_alpha)

or this line?

loss = tf.reduce_mean(func_fl)

Thanks.
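The two quoted lines are not alternatives: a focal-loss function of this shape typically returns one loss value per example, and the `tf.reduce_mean` call then collapses that vector into the single scalar an optimizer minimizes, so both lines are used together. A NumPy sketch of that relationship (the function body and argument names are assumptions modeled on the standard focal-loss formula, not the repo's exact code):

```python
import numpy as np

def focal_loss(labels, probs, gamma=2.0, alpha=0.25):
    """Per-example focal loss: FL = -alpha * (1 - p_t)**gamma * log(p_t).

    labels: one-hot, shape (batch, classes); probs: softmax outputs, same shape.
    Returns a vector with one loss value per example.
    """
    p_t = np.sum(labels * probs, axis=1)  # probability assigned to the true class
    return -alpha * (1.0 - p_t) ** gamma * np.log(p_t)

labels = np.array([[1.0, 0.0], [0.0, 1.0]])
probs = np.array([[0.9, 0.1], [0.3, 0.7]])

func_fl = focal_loss(labels, probs)  # shape (batch,): one value per example
loss = np.mean(func_fl)              # scalar, analogous to tf.reduce_mean(func_fl)
```

The scalar `loss` is what you would hand to the training step; `func_fl` alone is still a batch-sized vector.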

@lilmangolil

Hello, I'd like to ask: the loss function I use for classification is tf.nn.sparse_softmax_cross_entropy_with_logits. Can I swap that function directly for focal loss and pass its label and logits arguments unchanged? Thanks!
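Probably not unchanged: `tf.nn.sparse_softmax_cross_entropy_with_logits` takes integer class indices and raw (pre-softmax) logits, while a focal loss written against one-hot labels and probabilities expects both conversions to have happened already. A NumPy sketch of the adaptation steps (function name and defaults are hypothetical; the point is the two conversions, not the exact API):

```python
import numpy as np

def sparse_focal_loss(sparse_labels, logits, gamma=2.0, alpha=0.25):
    """Focal loss taking the same inputs as
    tf.nn.sparse_softmax_cross_entropy_with_logits:
    integer class labels and raw logits."""
    # 1. Apply softmax to the logits (the sparse cross-entropy op
    #    does this internally); shift by the max for numerical stability.
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # 2. Pick out p_t for each example's true class by integer indexing,
    #    which replaces the one-hot multiplication.
    p_t = probs[np.arange(len(sparse_labels)), sparse_labels]
    # 3. Focal weighting on top of the cross-entropy term.
    return -alpha * (1.0 - p_t) ** gamma * np.log(p_t)

labels = np.array([0, 1])                     # integer class indices
logits = np.array([[2.0, 0.0], [0.5, 1.5]])   # raw, pre-softmax scores
per_example = sparse_focal_loss(labels, logits)
```

In TensorFlow terms, the equivalents would be a softmax on the logits and either `tf.one_hot` on the labels or a gather of the true-class probabilities, before applying the focal weighting.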
