Unable to reproduce reported performance on AAPM16 dataset #3
Hi, thank you for your attention. I agree that the impact of hyperparameters should generally be minimal, usually less than 0.5 dB across settings if training is stable. It sounds like a step may have been missed in your process. Specifically, as described in the article, "we finetune each model for another ten epochs on AAPM dataset to bridge the domain gap following the same setting." We do this because the AAPM dataset is relatively small and many networks require more data for effective training. Please let me know if this step was overlooked in your procedure. Best regards.
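For anyone else reproducing this step, a minimal sketch of the finetuning stage described above might look like the following. This is not the authors' actual training code: the model class, checkpoint filename, loss, learning rate, and data loader are all placeholder assumptions; only the "load pretrained weights, then continue training for ten epochs on AAPM" structure comes from the thread.

```python
# Hedged sketch of the pretrain-then-finetune step (hypothetical names throughout).
import torch
import torch.nn as nn

def finetune(model: nn.Module, loader, epochs: int = 10,
             lr: float = 1e-4, device: str = "cpu") -> nn.Module:
    """Continue training a pretrained model on a new dataset to bridge the domain gap."""
    model.to(device).train()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()  # assumed reconstruction loss
    for _ in range(epochs):
        for sparse_view, full_view in loader:  # (input, target) image pairs
            sparse_view = sparse_view.to(device)
            full_view = full_view.to(device)
            opt.zero_grad()
            loss = loss_fn(model(sparse_view), full_view)
            loss.backward()
            opt.step()
    return model

# Usage (hypothetical checkpoint and loader names):
# model = Net()
# model.load_state_dict(torch.load("deeplesion_pretrained.pth"))
# finetune(model, aapm_loader, epochs=10)
```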
Thank you so much for the quick and helpful clarification; this makes a lot of sense. To confirm: the results in Table 2 are based on a model pre-trained on DeepLesion and then finetuned on AAPM16 for 10 epochs, whereas our current implementation trained on AAPM16 from scratch (without pre-training). That explains the performance gap we observed. Just to ensure we're aligned:
Thanks again for your patience and guidance; it's been a huge help!
Hi, you are welcome. I think we are aligned.
BTW, I personally haven't kept up with this area for some time, but I do have some findings that may benefit your study:
Best regards.
Dear authors, thank you so much for your fantastic work and for open-sourcing the code.
I'm reaching out because I'm having trouble reproducing the 18-view AAPM16 results from Table 2 (reported PSNR/SSIM: 37.91/0.9458). Following the released code closely, my best results so far are 34.80/0.9165.
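One source of gaps like this, separate from training, is the evaluation protocol itself: PSNR depends on the assumed data range (e.g., images normalized to [0, 1] vs. an HU window), and a mismatch there alone can shift the reported number by several dB. Below is a minimal reference implementation with the data range left explicit, since the paper's exact choice is an assumption on my side:

```python
import numpy as np

def psnr(ref: np.ndarray, rec: np.ndarray, data_range: float) -> float:
    """Peak signal-to-noise ratio in dB; the choice of data_range strongly
    affects the score, so it must match the paper's evaluation protocol."""
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

# Example: a uniform error of 0.1 on a [0, 1] range gives 20 dB.
```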
My only modifications were changing the batch size to 16 and training on a single NVIDIA 3090 GPU.
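Since my batch size and GPU count differ from the original setup, one common heuristic worth noting (not something the authors state here) is the linear learning-rate scaling rule: keep the ratio of learning rate to batch size constant when the batch size changes. The base values below are purely illustrative:

```python
def scale_lr(base_lr: float, base_batch: int, new_batch: int) -> float:
    """Linear scaling rule: keep lr / batch_size constant across batch sizes."""
    return base_lr * new_batch / base_batch

# Illustrative only: if the original run had used batch 32 at lr 2e-4,
# a batch of 16 would suggest lr 1e-4 under this rule.
```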
Could you share any tips for getting closer to the paper's performance? For example: