I just created a repo that runs weekly integration tests, and it surfaced an error in hubEvals where four tests are failing:
```
══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-score_model_out.R:329:3'): score_model_out succeeds with valid inputs: nominal pmf output_type, default metrics, custom by ──
Error in `scoringutils::as_forecast_nominal(data, forecast_unit = c("model", task_id_cols), observed = "observation", predicted = "value", model = "model", predicted_label = "output_type_id")`: unused argument (model = "model")
Backtrace:
    ▆
 1. └─hubEvals::score_model_out(...) at test-score_model_out.R:329:3
 2.   └─hubEvals:::transform_pmf_model_out(...) at hubEvals/R/score_model_out.R:96:3

── Error ('test-transform_pmf_model_out.R:16:3'): transform_pmf_model_out succeeds with valid inputs ──
Error in `scoringutils::as_forecast_nominal(data, forecast_unit = c("model", task_id_cols), observed = "observation", predicted = "value", model = "model", predicted_label = "output_type_id")`: unused argument (model = "model")
Backtrace:
    ▆
 1. └─hubEvals:::transform_pmf_model_out(...) at test-transform_pmf_model_out.R:16:3

── Error ('test-transform_pmf_model_out.R:46:3'): transform_pmf_model_out doesn't depend on specific column names for task id variables ──
Error in `scoringutils::as_forecast_nominal(data, forecast_unit = c("model", task_id_cols), observed = "observation", predicted = "value", model = "model", predicted_label = "output_type_id")`: unused argument (model = "model")
Backtrace:
    ▆
 1. └─hubEvals:::transform_pmf_model_out(...) at test-transform_pmf_model_out.R:46:3

── Error ('test-transform_pmf_model_out.R:59:3'): transform_pmf_model_out throws an error if model_out_tbl has no rows ──
Error in `scoringutils::as_forecast_nominal(data, forecast_unit = c("model", task_id_cols), observed = "observation", predicted = "value", model = "model", predicted_label = "output_type_id")`: unused argument (model = "model")
Backtrace:
    ▆
 1. ├─testthat::expect_error(...) at test-transform_pmf_model_out.R:59:3
 2. │ └─testthat:::expect_condition_matching(...)
 3. │   └─testthat:::quasi_capture(...)
 4. │     ├─testthat (local) .capture(...)
 5. │     │ └─base::withCallingHandlers(...)
 6. │     └─rlang::eval_bare(quo_get_expr(.quo), quo_get_env(.quo))
 7. ├─base::suppressWarnings(...)
 8. │ └─base::withCallingHandlers(...)
 9. └─hubEvals:::transform_pmf_model_out(...)

[ FAIL 4 | WARN 0 | SKIP 0 | PASS 64 ]
```
The source of this is epiforecasts/scoringutils#915, where the `model` parameter was removed from `as_forecast_nominal()`.
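For reference, here is a minimal sketch of what the corrected call inside `transform_pmf_model_out()` could look like: the now-removed `model = "model"` argument is dropped, and the model column is carried only through `forecast_unit`. The column names and objects (`data`, `task_id_cols`) are taken from the error message above; this is an assumed fix, not the actual hubEvals patch.

```r
# Assumed fix (sketch): after epiforecasts/scoringutils#915,
# as_forecast_nominal() no longer accepts a `model` argument, so the model
# column is referenced only via forecast_unit. `data` and `task_id_cols`
# are the objects named in the error message above.
forecast <- scoringutils::as_forecast_nominal(
  data,
  forecast_unit = c("model", task_id_cols),
  observed = "observation",
  predicted = "value",
  predicted_label = "output_type_id"
)
```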