KL loss in validation #23

Open · lizekai-richard opened this issue on Sep 30, 2024 · 4 comments

@lizekai-richard

Hi, may I ask why the KL loss is used during validation? This doesn't match Equation 9 in the paper, which is a cross-entropy loss.

@Jiacheng8

Hi, I think it's basically the same: you can think of this KL divergence as a cross-entropy computed against soft labels.
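
For concreteness, a minimal sketch of that identity (assuming PyTorch; the logits here are random placeholders, not the repo's code): with soft targets p and student distribution q, CE(p, q) = H(p) + KL(p ‖ q), so the KL term is just the cross-entropy shifted by the targets' entropy.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
student_logits = torch.randn(4, 10)   # placeholder student outputs
teacher_logits = torch.randn(4, 10)   # placeholder source of the soft labels

p = F.softmax(teacher_logits, dim=-1)            # soft targets
log_q = F.log_softmax(student_logits, dim=-1)    # student log-probabilities

kl = F.kl_div(log_q, p, reduction="batchmean")   # KL(p || q), averaged over the batch
ce = -(p * log_q).sum(dim=-1).mean()             # cross-entropy against soft labels
h_p = -(p * p.log()).sum(dim=-1).mean()          # entropy of the soft targets

print(torch.allclose(ce, kl + h_p, atol=1e-6))   # True: CE = KL + H(p)
```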

@lizekai-richard (Author) commented on Nov 20, 2024

@Jiacheng8 Are there any ablation results? How does using the KL loss compare with using the cross-entropy loss?

@Jiacheng8

The two are equivalent up to a constant term, although the minimum value of the objective will of course differ slightly. I will try to run an ablation on this.
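
A quick sanity check under the same assumptions (PyTorch, random placeholder logits): because the constant term H(p) does not depend on the student, the two losses give identical gradients with respect to the student logits; only the reported loss value is shifted.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(4, 10, requires_grad=True)   # placeholder student logits
p = F.softmax(torch.randn(4, 10), dim=-1)         # placeholder soft targets

kl = F.kl_div(F.log_softmax(logits, dim=-1), p, reduction="batchmean")
grad_kl, = torch.autograd.grad(kl, logits)

ce = -(p * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
grad_ce, = torch.autograd.grad(ce, logits)

print(torch.allclose(grad_kl, grad_ce, atol=1e-6))  # True: identical gradients
```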

@lizekai-richard (Author)

@Jiacheng8 Thanks. My main concern is that using the KL loss produces a stronger knowledge-distillation effect, especially since a temperature is also applied in your case. I'm wondering whether adopting this evaluation strategy, rather than the dataset itself being of higher quality, is the main reason for the performance improvement. Yet the paper doesn't present an ablation on this.
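
For reference, the kind of term I mean is the standard temperature-scaled distillation loss in the style of Hinton et al.; this sketch is illustrative (the function name and T value are mine, not necessarily the exact loss used in this repo):

```python
import torch
import torch.nn.functional as F

def kd_kl_loss(student_logits, teacher_logits, T=2.0):
    """Temperature-scaled KL(teacher || student); the T**2 factor keeps
    gradient magnitudes comparable to the unscaled (T = 1) cross-entropy."""
    log_q = F.log_softmax(student_logits / T, dim=-1)
    p = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(log_q, p, reduction="batchmean") * (T ** 2)
```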
