Fix explain_forecast and implement adaptive within that framework. #405
Conversation
* Move a lot of explain_forecast to the new functions.
* Drop old, unused explain_forecast functions.
* Fix zzz temporarily.
I ran devtools::document(), added tests to be run on GHA, and pushed that. Not sure why it still does not run, though; maybe due to a few merge conflicts? Anyway, I got quite a few failing tests locally. It would be great if you could take a look at those. I think they should not be too hard to fix. Also, maybe some of them get resolved when rebasing onto shapr-1.0.0. Once that is done I'll look more closely at your code. Overall, it looks great though :-)
Review notes
- Function adjust_max_n_coalitions may need a forecast-specific version, since grouping is typically applied (but not always by the user), which makes the output hard to understand.
- Function cli_iter should have a separate forecast version, as the number of used coalitions is not known before sampling and the total number is only correct for the largest horizon. We need to decide what to output in that case.
- Causal, asymmetric, etc. are skipped for forecast. This is perfectly fine.
- Python version skipped for forecast? The main issue is that we need more general predict_model functions. The rest is not too hard, I think.
- Bootstrapping is probably done incorrectly now. I think this must happen at the X_list level rather than at the merged level. The easiest solution is probably to save X_list as well and loop over it for the main functionality in the bootstrapping function. We should then wrap something like L284-L376 of compute_estimates.R in a function and call that function several times for forecast, but only once for regular explain (see the sketch below).
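
To make that suggestion concrete, here is a minimal structural sketch, assuming a hypothetical helper `bootstrap_one_X()` that would hold the logic currently around L284-L376 of compute_estimates.R. The object names (`internal$objects$X`, `internal$objects$X_list`, `dt_vS`, `n_boot_samps`) are taken from this discussion and may not match shapr's actual internals exactly:

```r
# Hypothetical refactoring sketch, not shapr's actual implementation.
bootstrap_one_X <- function(X, dt_vS, n_boot_samps) {
  # Placeholder for the bootstrap logic that currently operates on the merged
  # table: resample coalitions within this single X and recompute the Shapley
  # value estimates n_boot_samps times. Returning NULL keeps the sketch
  # runnable without inventing shapr internals.
  NULL
}

# Regular explain(): one merged coalition table, so a single call.
# boot_res <- bootstrap_one_X(internal$objects$X, dt_vS, n_boot_samps)

# explain_forecast(): bootstrap at the X_list level, i.e. once per horizon,
# and combine the per-horizon results afterwards.
# boot_res_list <- lapply(internal$objects$X_list, bootstrap_one_X,
#                         dt_vS = dt_vS, n_boot_samps = n_boot_samps)
```

The point is only that the per-table logic lives in one place, so forecast can bootstrap per horizon and then aggregate, while regular explain calls the helper once.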
Remaining failing forecast tests:
Output:
- ARIMA gives the same output with different horizons with grouping: `Error: Can't assign 5 names to a 3-column data.table`
- forecast_output_arima_numeric_no_lags: Lots of warnings. Not quite sure what is going on?
Setup:
- erroneous input:
  - max_n_coalitions: ``Error in `[.data.table`(X_boot00, , `:=`(boot_id, rep(seq(n_boot_samps), times = n_coalitions_boot/2)))``: Supplied 700 items to be assigned to 750 items of column 'boot_id'. If you wish to 'recycle' the RHS please use rep() to make this intent clear to readers of your code.
The setup failure is probably due to errors in the bootstrapping approach. The others may also be related to that.
Maybe fix the bootstrapping thing first?
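
For context on that error, here is a minimal, self-contained illustration of the mechanism with made-up sizes; only the names `n_boot_samps` and `boot_id` come from the error above, and `rep_len()` is just one defensive way to make the lengths agree, not necessarily the right fix in shapr:

```r
library(data.table)

n_boot_samps <- 10
X_boot <- data.table(row = seq_len(75))  # pretend bootstrapped coalition table with 75 rows

# Mirrors the failing assignment: rep(seq(10), times = 7) has 70 items for 75 rows,
# so data.table refuses to recycle and throws
# "Supplied 70 items to be assigned to 75 items of column 'boot_id' ...".
# X_boot[, boot_id := rep(seq(n_boot_samps), times = 7)]

# Making the lengths agree explicitly (e.g. with rep_len()) silences the error,
# but the underlying fix is presumably to compute the per-horizon coalition
# counts so the intended rep() length matches nrow(X_boot) exactly.
X_boot[, boot_id := rep_len(seq(n_boot_samps), nrow(X_boot))]
```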
We agreed on the following:
* Fix bootstrapping for multiple horizons in explain_forecast.
…-rebase
# Conflicts:
#   R/compute_estimates.R
Enable explain_forecast to comply with the adaptive/iterative estimation procedure.