Given the models that will potentially be benchmarked, and the way we will benchmark them, the architecture should be reviewed. We must decide whether it makes sense to create a new interface for benchmarked models, or to reuse the interface/classes already defined in ontime (AbstractModel/Model, see https://github.com/ontime-re/ontime/tree/develop/src/ontime/core/modelling).
For the benchmark models, we need (see the sketch after this list):
- an evaluation method, performed in rolling-window mode
- a basic fit method, with training delegated to the underlying model
- a predict method that can predict on a new series as well as on the one the model was trained on
- a flag to determine the evaluation mode (zero-shot, few-shot, full-shot)
- the possibility to load a checkpoint, particularly for foundation models
- univariate models should be able to perform multi-univariate predictions when the given input series is multivariate
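To make the discussion concrete, here is a minimal sketch of what such an interface could look like. All names (`BenchmarkModel`, `EvaluationMode`, the `evaluate` signature) are hypothetical and not part of ontime's current API; the rolling-window loop assumes a series object that supports `len()` and slicing.

```python
from abc import ABC, abstractmethod
from enum import Enum
from typing import Optional


class EvaluationMode(Enum):
    """Evaluation regimes for benchmarked models (hypothetical)."""
    ZERO_SHOT = "zero-shot"
    FEW_SHOT = "few-shot"
    FULL_SHOT = "full-shot"


class BenchmarkModel(ABC):
    """Sketch of a common interface for benchmarked models.

    A univariate implementation is expected to handle a multivariate
    input series by forecasting each component independently
    (multi-univariate prediction).
    """

    def __init__(self, evaluation_mode: EvaluationMode = EvaluationMode.FULL_SHOT):
        self.evaluation_mode = evaluation_mode

    @abstractmethod
    def fit(self, series):
        """Fit the model; training details are delegated to the wrapped model."""

    @abstractmethod
    def predict(self, horizon: int, series=None):
        """Forecast `horizon` steps, either continuing the training series
        (series=None) or a new, unseen series."""

    def evaluate(self, series, horizon: int, stride: int = 1,
                 start: Optional[int] = None):
        """Rolling-window evaluation: slide a cut-off point across the
        series and forecast `horizon` steps from each cut-off."""
        start = start if start is not None else len(series) // 2
        forecasts = []
        for cutoff in range(start, len(series) - horizon + 1, stride):
            forecasts.append(self.predict(horizon, series[:cutoff]))
        return forecasts
```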
We do not need to specifically implement checkpoint loading: we can let the model-specific wrapper handle it, e.g. with a "checkpoint" parameter.
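Continuing the sketch above, a wrapper could accept such a parameter in its constructor, keeping checkpoint handling out of the shared interface. `FoundationModelWrapper`, `_build`, and `_load` are hypothetical names used only for illustration.

```python
from typing import Optional


class FoundationModelWrapper(BenchmarkModel):
    """Hypothetical wrapper: checkpoint loading stays inside the wrapper
    rather than in the shared benchmark interface."""

    def __init__(self, checkpoint: Optional[str] = None, **kwargs):
        super().__init__(**kwargs)
        # The wrapper alone decides what to do with the checkpoint.
        self.model = self._load(checkpoint) if checkpoint else self._build()

    def _build(self):
        ...  # instantiate the underlying model from scratch

    def _load(self, checkpoint: str):
        ...  # restore the underlying model from a checkpoint file

    def fit(self, series):
        # Zero-shot foundation models may skip fitting entirely.
        if self.evaluation_mode is not EvaluationMode.ZERO_SHOT:
            self.model.fit(series)

    def predict(self, horizon: int, series=None):
        return self.model.predict(horizon, series)
```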
This issue is linked to issue #46.