In the fully connected network, why does the loss function compare `tf.matmul(hidden, fc2_weight) + fc2_biases` directly with the labels, instead of comparing after softmax and argmax?
Could you point to the specific line of code? It's been a year and I've forgotten some of the details.
The model's return value:

```python
return tf.matmul(hidden, fc2_weight) + fc2_biases
```

and the loss:

```python
logits = model(self.tf_train_samples)
with tf.name_scope('loss'):
    self.loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, self.tf_train_labels))
    self.loss += self.apply_regularization(_lambda=5e-4)
```

The label set is one-hot; a value of 1 can be read as "the probability that this image belongs to that class is 1". But when computing the loss, the model's raw output is compared with the labels directly. Why not add a softmax layer first, to convert the output into probabilities before comparing?
The function name already tells you what it does:

```python
tf.nn.softmax_cross_entropy_with_logits(logits, self.tf_train_labels)
```
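In other words, `tf.nn.softmax_cross_entropy_with_logits` applies softmax to the logits internally (fused with the cross-entropy for numerical stability), which is why the model deliberately returns raw logits. A minimal sketch of this equivalence, assuming TensorFlow 2.x with eager execution (the repo's original code used an older TF API with positional arguments; the tensor values here are made up for illustration):

```python
import tensorflow as tf

# Raw, unnormalized model outputs (logits) for a batch of 2 samples, 3 classes.
logits = tf.constant([[2.0, 1.0, 0.1],
                      [0.5, 2.5, 0.3]])
# One-hot labels, as in the issue: a 1 marks the true class.
labels = tf.constant([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])

# The library op: softmax is applied *inside* the call, so raw logits go in.
loss_fused = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)

# Equivalent manual computation: softmax first, then cross-entropy.
probs = tf.nn.softmax(logits)
loss_manual = -tf.reduce_sum(labels * tf.math.log(probs), axis=1)

print(loss_fused.numpy())   # per-sample cross-entropy
print(loss_manual.numpy())  # matches loss_fused up to floating-point error
```

As for argmax: it is not differentiable, so it cannot appear inside a training loss. It is only used at evaluation time to pick the predicted class, for example when computing accuracy.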