Concerns Regarding the Fairness of SRe2L Comparison #9

Open
Jiacheng8 opened this issue Mar 5, 2025 · 0 comments
Thank you for your excellent work! I truly appreciate the effort put into this study. However, I have some concerns regarding the fairness of the comparison for SRe2L.

As highlighted in the recent work "Dataset Distillation via Committee Voting" (https://arxiv.org/abs/2501.07575), SRe2L's hyperparameters and recovery process for small datasets such as CIFAR-10 and CIFAR-100 were found to be highly suboptimal, leading to a significant performance gap (with both hard and soft labels). Given this, I believe it is crucial to re-evaluate the leaderboard using the updated SRe2L++ to ensure a fair comparison.

Additionally, the current benchmark covers relatively simple datasets. To better assess the performance of different methods, it would be beneficial to include large-scale datasets such as ImageNet-1k and its subsets, which would provide a more comprehensive evaluation of each approach.

I appreciate your time and consideration, and I look forward to your thoughts on this matter.
