I have two concerns regarding the implementation of the EarlyStopping mechanism in your project:
Adjustment of the `delta` Value After Reducing the Learning Rate: after the patience threshold is met, the learning rate is reduced to one-tenth of its value and the `delta` passed to the EarlyStopping mechanism is changed from -0.001 to -0.002. Could you clarify the rationale for making `delta` more stringent (-0.002) after the learning-rate drop? This change appears to require the model to show a larger improvement than before to avoid being judged as having "limited improvement", yet a smaller learning rate usually yields smaller, more conservative gains in accuracy, so I would have expected the threshold to become more lenient instead.
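To make the concern concrete, here is a minimal sketch of the comparison described above. This is an assumption about how the check behaves, modeled on the issue's description that a more negative `delta` demands a larger gain; the project's actual implementation may use a different comparison or sign convention, and `is_limited_improvement` is a hypothetical helper, not a function from the repository.

```python
def is_limited_improvement(score, score_max, delta):
    """Flag "limited improvement" when the new score does not beat the
    previous best by at least |delta| (delta is negative here, so the
    required margin is -delta)."""
    return score < score_max - delta

# With delta = -0.001, a gain of 0.0015 over the best score of 0.900
# clears the margin and counts as real improvement:
print(is_limited_improvement(0.9015, 0.900, -0.001))  # False

# After the LR drop, delta = -0.002 demands a larger gain, so the same
# 0.0015 gain is now flagged as "limited improvement":
print(is_limited_improvement(0.9015, 0.900, -0.002))  # True
```

Under this reading, tightening `delta` right after shrinking the learning rate works against the smaller per-epoch gains the lower rate produces.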
Potential Issue with Resetting `self.score_max` Upon EarlyStopping Re-instantiation: when `early_stopping = EarlyStopping(patience=opt.earlystop_epoch, delta=-0.002, verbose=True)` is executed in train.py, `self.score_max` is reset to `-np.Inf`. As a result, the "best" weights saved after re-instantiating the EarlyStopping object may not actually be better than the weights saved before, because `self.score_max` no longer retains its previous value. Shouldn't `self.score_max` be preserved across re-instantiations so that only genuinely better model states are saved? This looks like a bug, since it defeats the purpose of tracking the best model performance across training epochs.
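The reset and one possible way to avoid it can be sketched as follows. The `EarlyStopping` class here is a stand-in modeling only the fields named in the issue (`score_max`, `delta`, `patience`); the optional `score_max` constructor argument is a hypothetical fix, not part of the project's actual API.

```python
import numpy as np

class EarlyStopping:
    """Stand-in for the project's EarlyStopping, reduced to the fields
    discussed above."""
    def __init__(self, patience=5, delta=0.0, verbose=False,
                 score_max=-np.inf):
        self.patience = patience
        self.delta = delta
        self.verbose = verbose
        # Accepting a starting score_max would avoid the reset below.
        self.score_max = score_max

# Current behavior: re-instantiation starts from -inf, so the first
# checkpoint after the learning-rate drop is saved unconditionally,
# even if it is worse than the best weights saved earlier.
old = EarlyStopping(delta=-0.001)
old.score_max = 0.95                       # best validation score so far
new = EarlyStopping(delta=-0.002)          # score_max is -inf again
assert new.score_max == -np.inf

# Possible fix: carry the previous best forward when re-instantiating.
fixed = EarlyStopping(delta=-0.002, score_max=old.score_max)
assert fixed.score_max == 0.95
```

Equivalently, mutating `early_stopping.delta` in place instead of constructing a new object would also preserve the tracked best score.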
Looking forward to your insights on these points.