[FIX] Unify API #1023
base: main
Conversation
I ran my own small experiment, and models are slightly faster, with no degradation in performance. The refactoring looks good to me, but a second opinion would be great on this!

Thanks!
i noticed that this file and `action_files/test_models/src/evaluation2.py` are quite similar. i have a couple of suggestions:

- this might be a good opportunity to use the utilsforecast evaluation features: we could switch to `mae` and `smape` from its `losses` module and its `evaluate` function (a sketch follows at the end of this comment).
- also, it looks like the only difference between this file and the second one is the list of models, correct? if that's the case, we could combine them into a single file and use fire to pass the list of models. you could then call it in the `.github/workflows/ci.yaml` file.
the idea would be to abstract the code in the `if __name__ == '__main__':` clause, something like this:
```python
from itertools import product

import pandas as pd


def main(models: list):
    groups = ['Monthly']
    datasets = ['M3']
    # `evaluate` here is the evaluation function already defined in this module.
    evaluation = [evaluate(model, dataset, group) for model, group in product(models, groups) for dataset in datasets]
    evaluation = [eval_ for eval_ in evaluation if eval_ is not None]
    evaluation = pd.concat(evaluation)
    evaluation = evaluation[['dataset', 'model', 'time', 'mae', 'smape']]
    evaluation['time'] /= 60  # minutes
    evaluation = evaluation.set_index(['dataset', 'model']).stack().reset_index()
    evaluation.columns = ['dataset', 'model', 'metric', 'val']
    evaluation = evaluation.set_index(['dataset', 'metric', 'model']).unstack().round(3)
    evaluation = evaluation.droplevel(0, 1).reset_index()
    evaluation['AutoARIMA'] = [666.82, 15.35, 3.000]
    evaluation.to_csv('data/evaluation.csv')
    print(evaluation.T)
```
and then you could use fire inside the main clause:
```python
if __name__ == '__main__':
    import fire
    fire.Fire(main)
```
this way, we can run it for different models inside `.github/workflows/ci.yaml`: `python -m action_files.test_models.src.evaluation --models <list of models>`. wdyt?
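For reference, a minimal sketch of what that utilsforecast-based evaluation could look like; the `evaluate_forecasts` helper and the assumed layout of the forecasts dataframe are illustrative, not part of this PR:

```python
import pandas as pd
from utilsforecast.evaluation import evaluate
from utilsforecast.losses import mae, smape


def evaluate_forecasts(forecasts: pd.DataFrame, models: list) -> pd.DataFrame:
    # `forecasts` is assumed to be a long dataframe with `unique_id`, `ds`,
    # the actuals in `y`, and one prediction column per model name in `models`.
    evaluation = evaluate(
        forecasts,
        metrics=[mae, smape],
        models=models,
        target_col='y',
    )
    # Average the per-series scores into one row per metric, one column per model.
    return evaluation.groupby('metric')[models].mean().round(3)
```

Since `mae`, `smape` and `evaluate` come directly from `utilsforecast`, the file's own metric code could likely be dropped.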
this could also apply to `action_files/test_models/src/multivariate_evaluation.py`. since we are changing models and datasets, we could define `main(models: list, dataset: str)`.
Good idea. One remark: why favour GitHub Actions (`ci.yaml`) over CircleCI? (I'm ambivalent; I don't know why we would prefer one over the other.)
maybe we could also merge `action_files/test_models/src/models.py`, `action_files/test_models/src/models2.py`, and `action_files/test_models/src/multivariate_models.py`. from what i can tell, the only real difference between them is the list of models. if that's the case, we could add a new parameter to main, maybe `config: str` or something similar, and then have a dictionary with different models based on the config, for example `{"multivariate": , ...}`. we could then call it like this: `python -m action_files.test_models.src.models --config multivariate`.
let me know what you think.
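A hypothetical sketch of that `config` idea; the config names and model lists below are placeholders rather than the actual contents of these files:

```python
import fire
from neuralforecast.models import NHITS, TSMixer

# Placeholder mapping; the real lists would come from models.py, models2.py
# and multivariate_models.py.
MODEL_CONFIGS = {
    'univariate': [NHITS],
    'multivariate': [TSMixer],
}


def main(config: str):
    models = MODEL_CONFIGS[config]
    for model in models:
        ...  # instantiate, fit and store forecasts for each model class


if __name__ == '__main__':
    fire.Fire(main)
```

It could then be invoked as suggested above, e.g. `python -m action_files.test_models.src.models --config multivariate`.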
Makes sense; then we can still fire up multiple runners (for the sake of keeping test time under control, it makes sense to split the tests).
```python
# from neuralforecast.models.rnn import RNN
# from neuralforecast.models.tcn import TCN
from neuralforecast.models.lstm import LSTM
from neuralforecast.models.dilated_rnn import DilatedRNN
from neuralforecast.models.deepar import DeepAR
from neuralforecast.models.mlp import MLP
from neuralforecast.models.nhits import NHITS
from neuralforecast.models.nbeats import NBEATS
# from neuralforecast.models.deepar import DeepAR
# from neuralforecast.models.mlp import MLP
# from neuralforecast.models.nhits import NHITS
# from neuralforecast.models.nbeats import NBEATS
```
is the commented code going to be restored in the future? if this change is permanent, maybe we could delete those lines instead.
I'm kind of treating this file as a local testing file too; we can delete it (it's mainly there so that testing locally is faster and you don't need to type all that stuff every time).
" n_series = 1\n", | ||
" self.n_series = n_series \n", |
should we set `n_series = None` in this case? using `n_series = 1` might lead to unexpected bugs, though i'm not entirely sure if that's the case here.
We need `n_series=1` for the univariate models in a few other places. I could also introduce additional helper variables to deal with that situation, but that feels a bit unnecessary perhaps...
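Purely to illustrate the trade-off being discussed, one possible middle ground (class and flag names are hypothetical, not the PR's implementation):

```python
from typing import Optional


class ExampleUnivariateModel:
    MULTIVARIATE = False  # hypothetical flag mirroring the PR's attribute scheme

    def __init__(self, n_series: Optional[int] = None):
        if not self.MULTIVARIATE:
            # Accept None in the public signature, but normalize to 1 so the
            # internal code paths that expect an integer keep working.
            if n_series not in (None, 1):
                raise ValueError('Univariate models expect n_series to be None or 1.')
            n_series = 1
        self.n_series = n_series
```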
This is a large refactoring PR, open for discussion. The main goal of the PR is to unify the API across different model types and to unify loss functions across different loss types.
Refactoring:

- Combined `BaseWindows`, `BaseMultivariate` and `BaseRecurrent` into `BaseModel`, removing the need for separate classes and unifying the model API across different model types. Instead, this PR introduces two model attributes, yielding four possible model options: `RECURRENT` (`True`/`False`) and `MULTIVARIATE` (`True`/`False`). We currently have a model for every combination except a recurrent multivariate model (e.g. a multivariate LSTM); however, this is now relatively simple to add. In addition, this change allows models to be recurrent or not, and multivariate or not, on the fly based on users' input. This also allows for easier modelling going forward (see the sketch after this list).
- `domain_map` functions: `loss.domain_map` moved outside of the models to `BaseModel`.
- Moved `TSMixer`, `TSMixerx` and `RMoK` to `common.modules`.
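As a purely illustrative sketch of the attribute scheme described above (only the two flags come from this description; the class bodies are placeholders):

```python
class BaseModel:
    # Two class-level flags replace the three former base classes.
    RECURRENT = False      # True: the model is unrolled over time steps
    MULTIVARIATE = False   # True: the model forecasts all series jointly


class LSTM(BaseModel):
    RECURRENT = True       # recurrent, univariate


class TSMixer(BaseModel):
    MULTIVARIATE = True    # windows-based, multivariate


# The remaining combination (RECURRENT=True, MULTIVARIATE=True), e.g. a
# multivariate LSTM, is not implemented yet but is now simple to add.
```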
Features:

- `DistributionLoss` now supports the use of `quantiles` in `predict`, allowing for easy quantile retrieval for all DistributionLosses (see the usage sketch after this list).
- Mixture losses (`GMM`, `PMM` and `NBMM`) now support learned weights for weighted mixture distribution outputs.
- Mixture losses also support the use of `quantiles` in `predict`, allowing for easy quantile retrieval.
- `ISQF`: added softplus protection around some parameters instead of using `.abs`.
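A hypothetical usage sketch of the quantile retrieval described above (the model, dataset and the exact `quantiles` argument name in `predict` are assumptions based on this description, not verified against the diff):

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS
from neuralforecast.losses.pytorch import DistributionLoss
from neuralforecast.utils import AirPassengersDF

nf = NeuralForecast(
    models=[NHITS(h=12, input_size=24,
                  loss=DistributionLoss(distribution='Normal'))],
    freq='M',
)
nf.fit(df=AirPassengersDF)
# Assumed interface: each requested quantile comes back as an extra column.
forecasts = nf.predict(quantiles=[0.1, 0.5, 0.9])
```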
Bug fixes:

- `MASE` loss now works.
- `StudentT`: increased the default DoF to 3 to reduce unbound variance issues.
- Fixed an issue where `eval: false` was set on the examples whilst not having any other tests, causing most models to effectively not be tested at all.

Breaking changes:

- `input_size` now needs to be given.
- `TCN` and `DRNN` are now windows models, not recurrent models.

Tests:

- Added `common._model_checks.py`, which includes a model testing function. This function runs on every separate model, ensuring that every model is tested on push.

Todo: