In the paper, the author writes "..., which the cross-entropy loss L^attr is applied for pedestrian attribute recognition." However, in your code the loss function is BCEWithLogitsLoss. Could you explain the reason for this?
Sorry, I asked a stupid question. PyTorch's BCEWithLogitsLoss corresponds to Caffe's SigmoidCrossEntropyLoss. But I have another question: why do the weights in your paper take those particular values? How did you compute them?
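For anyone else wondering about the equivalence: BCEWithLogitsLoss is just the sigmoid followed by binary cross-entropy, computed in one numerically stable step. A small NumPy sketch (the stable formula below is the one documented for PyTorch's BCEWithLogitsLoss) checks this numerically:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce_with_logits(x, y):
    # Numerically stable formulation used by BCEWithLogitsLoss:
    # max(x, 0) - x*y + log(1 + exp(-|x|))
    return np.maximum(x, 0) - x * y + np.log1p(np.exp(-np.abs(x)))

def sigmoid_cross_entropy(x, y):
    # Naive "sigmoid, then binary cross-entropy" written out directly
    p = sigmoid(x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

x = np.array([-2.0, -0.5, 0.5, 1.5, 3.0])
y = np.array([0.0, 1.0, 1.0, 0.0, 1.0])
print(np.allclose(bce_with_logits(x, y), sigmoid_cross_entropy(x, y)))  # True
```

The stable form avoids overflow in `exp` for large-magnitude logits, which is why the fused loss is preferred over applying `sigmoid` and `BCELoss` separately.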
The weights I calculated are as follows:

```
[0.63371877 0.98601126 0.39297374 0.9494238  0.69839181 0.75289839
 0.69963256 0.95700181 0.83599094 0.83235188 0.82602933 0.86278694
 0.99040879 0.55656295 0.66098443 0.95031431 0.85723987 0.89706675
 0.95695396 0.99496273 0.98239432 0.97219391 0.49214224 0.84040194
 0.88946288 0.9938316 ]
```
My calculation is as follows:

```python
import numpy as np
import scipy.io as sio

pa_100k = sio.loadmat(matfile)
labels = pa_100k['train_label']
positive = np.zeros(26)
for label in labels:
    positive = positive + label
positive = positive / len(labels)  # per-attribute positive ratio
weight = np.exp(-positive)
print(weight)
```
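If these exp(-positive_ratio) values are meant to reweight the loss per attribute, one plausible way they would enter the criterion is as a per-attribute multiplier on the sigmoid cross-entropy terms. This is only a NumPy sketch of that idea (the function name and the exact weighting scheme are my assumption, not necessarily the repository's code):

```python
import numpy as np

def weighted_bce_with_logits(logits, targets, weights):
    # Per-attribute weighted sigmoid cross-entropy, numerically stable.
    # logits, targets: shape (batch, num_attrs); weights: shape (num_attrs,),
    # e.g. exp(-positive_ratio) as computed above (hypothetical weighting).
    per_elem = (np.maximum(logits, 0) - logits * targets
                + np.log1p(np.exp(-np.abs(logits))))
    return np.mean(per_elem * weights)

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 26))                         # 4 samples, 26 attributes
targets = rng.integers(0, 2, size=(4, 26)).astype(float)  # binary attribute labels
weights = np.exp(-targets.mean(axis=0))                   # stand-in for the ratios above
print(weighted_bce_with_logits(logits, targets, weights))
```

In PyTorch this per-attribute weighting can be passed directly via the `weight` argument of `BCEWithLogitsLoss`, so it would be worth checking whether the repository does exactly that.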