When the backbone is small (e.g., DLinear), we jointly train LIFT and a randomly initialized backbone.
As for PatchTST, Crossformer, and MTGNN, most results are based on a frozen backbone. Empirically, there is little difference between the two training schemes. We recommend building LIFT on a pretrained, frozen backbone to save training time.
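
A minimal sketch of the frozen-backbone scheme is shown below. The module names here (`backbone`, `lift_head`) are placeholders for illustration, not the repository's actual API; in practice you would substitute the pretrained forecaster (e.g., PatchTST) and the LIFT plugin.

```python
import torch
import torch.nn as nn

# Placeholder modules: in practice, `backbone` is the pretrained forecaster
# and `lift_head` is the LIFT module that refines its predictions.
backbone = nn.Linear(96, 24)      # stand-in for a pretrained backbone
lift_head = nn.Linear(24, 24)     # stand-in for the LIFT module

# Frozen-backbone scheme: the backbone receives no gradient updates.
for p in backbone.parameters():
    p.requires_grad = False
backbone.eval()

# Only LIFT's parameters are optimized.
optimizer = torch.optim.Adam(lift_head.parameters(), lr=1e-3)

x = torch.randn(32, 96)           # dummy input batch
y = torch.randn(32, 24)           # dummy targets
pred = lift_head(backbone(x))     # backbone forecast refined by LIFT
loss = nn.functional.mse_loss(pred, y)
loss.backward()                   # gradients flow into lift_head only
optimizer.step()
```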
Thanks for your work.
Do the LIFT results in the paper come from a model where the backbone is frozen?