---
title: 'scoringutils: Utilities for Scoring and Assessing Predictions'
output:
github_document:
toc: false
---
<!-- badges: start -->
[](https://github.com/epiforecasts/scoringutils/actions/workflows/R-CMD-check.yaml)
[](https://app.codecov.io/gh/epiforecasts/scoringutils)
[](https://CRAN.R-project.org/package=scoringutils)

[](https://cran.r-project.org/package=scoringutils)
<!-- badges: end -->
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE,
                      fig.width = 7,
                      collapse = TRUE,
                      comment = "#>",
                      fig.path = "man/figures/")
library(scoringutils)
library(magrittr)
library(data.table)
library(ggplot2)
library(knitr)
```
The `scoringutils` package provides a collection of metrics and proper scoring rules and aims to make it simple to score probabilistic forecasts against observed values.
You can find additional information and examples in the papers [Evaluating Forecasts with scoringutils in R](https://arxiv.org/abs/2205.07090) and [Scoring epidemiological forecasts on transformed scales](https://www.medrxiv.org/content/10.1101/2023.01.23.23284722v1), as well as in the vignettes ([Getting started](https://epiforecasts.io/scoringutils/articles/scoringutils.html), [Details on the metrics implemented](https://epiforecasts.io/scoringutils/articles/metric-details.html) and [Scoring forecasts directly](https://epiforecasts.io/scoringutils/articles/scoring-forecasts-directly.html)).
The `scoringutils` package offers convenient automated forecast evaluation through the function `score()`. The function operates on data.frames (it uses `data.table` internally for speed and efficiency) and can easily be integrated in a workflow based on `dplyr` or `data.table`. It also provides experienced users with a set of reliable lower-level scoring metrics operating on vectors/matrices they can build upon in other applications. In addition it implements a wide range of flexible plots designed to cover many use cases.
Where available, `scoringutils` relies on functionality from `scoringRules`, which provides a comprehensive collection of proper scoring rules for predictive probability distributions represented as samples or parametric distributions. For some forecast types, such as quantile forecasts, `scoringutils` implements additional metrics. On top of providing an interface to the proper scoring rules implemented in `scoringRules` and natively, `scoringutils` also offers utilities for summarising and visualising forecasts and scores, and for obtaining relative scores between models, which can be useful for non-overlapping forecasts and forecasts across scales.
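The lower-level scoring functions can also be called directly on vectors and matrices. As a sketch (using `scoringRules::crps_sample()`; the exact set of lower-level functions exported by `scoringutils` itself may differ between versions):

```{r, eval = FALSE}
library(scoringRules)

# Observed values and a matrix of predictive samples (one row per observation)
observed <- c(10, 25, 8)
samples <- matrix(rpois(3 * 500, lambda = observed), nrow = 3)

# Continuous ranked probability score, one value per observation
crps_sample(y = observed, dat = samples)
```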
Predictions can be handled in various formats: `scoringutils` accepts probabilistic forecasts in either a sample-based or a quantile-based format. For more detail on the expected input formats, please see below. True values can be integer, continuous or binary; appropriate scores for each of these value types are selected automatically.
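As an illustration, a quantile-based forecast data.frame looks roughly like the following (column names here follow the bundled `example_quantile` data; check that dataset for the exact format expected by your installed version):

```{r, eval = FALSE}
# One forecast, represented by three quantiles of the predictive distribution
data.frame(
  model = "example-model",
  location = "DE",
  target_end_date = as.Date("2021-07-03"),
  target_type = "Cases",
  quantile = c(0.25, 0.5, 0.75),
  predicted = c(1800, 2000, 2200),
  observed = 2100
)
```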
## Installation
Install the CRAN version of this package using:
```{r, eval = FALSE}
install.packages("scoringutils")
```
Install the stable development version of the package with:
```{r, eval = FALSE}
install.packages("scoringutils", repos = "https://epiforecasts.r-universe.dev")
```
Install the unstable development version from GitHub using:
```{r, eval = FALSE}
remotes::install_github("epiforecasts/scoringutils", dependencies = TRUE)
```
## Quick start
In this quick start guide we explore some of the functionality of the `scoringutils` package using quantile forecasts from the [ECDC forecasting hub](https://covid19forecasthub.eu/) as an example. For more detailed documentation please see the package vignettes, and individual function documentation.
### Plotting forecasts
As a first step to evaluating the forecasts, we visualise them. For the purposes of this example, we use `make_NA()` to filter the available forecasts down to a single model and forecast date, and then plot them with `plot_predictions()`.
```{r, fig.width = 9, fig.height = 6}
example_quantile %>%
  make_NA(what = "truth",
          target_end_date >= "2021-07-15",
          target_end_date < "2021-05-22"
  ) %>%
  make_NA(what = "forecast",
          model != "EuroCOVIDhub-ensemble",
          forecast_date != "2021-06-28"
  ) %>%
  plot_predictions(
    x = "target_end_date",
    by = c("target_type", "location")
  ) +
  facet_wrap(target_type ~ location, ncol = 4, scales = "free")
```
### Scoring forecasts
Forecasts can be easily and quickly scored using the `score()` function. `score()` automatically tries to determine the `forecast_unit`, i.e. the set of columns that uniquely defines a single forecast, by taking all column names of the data into account. However, we recommend setting the forecast unit manually using `set_forecast_unit()`, as this helps avoid errors, especially when `scoringutils` is used in automated pipelines. `set_forecast_unit()` simply drops unneeded columns. To verify that everything is in order, use `validate()`; its result can then be passed directly into `score()`. `score()` returns unsummarised scores, which in most cases is not what the user wants. Here we use additional functions from `scoringutils` to add empirical coverage levels (`add_coverage()`) and scores relative to a baseline model (here chosen to be the EuroCOVIDhub-ensemble model). See the getting started vignette for more details. Finally, we summarise these scores by model and target type.
```{r score-example}
example_quantile %>%
  set_forecast_unit(c("location", "target_end_date", "target_type", "horizon", "model")) %>%
  validate() %>%
  add_coverage() %>%
  score() %>%
  summarise_scores(
    by = c("model", "target_type")
  ) %>%
  add_pairwise_comparison(
    baseline = "EuroCOVIDhub-ensemble"
  ) %>%
  summarise_scores(
    fun = signif,
    digits = 2
  ) %>%
  kable()
```
`scoringutils` contains additional functionality to transform forecasts, to summarise scores at different levels, to visualise them, and to explore the forecasts themselves. See the package vignettes and function documentation for more information.
You may want to score forecasts based on transformations of the original data in order to obtain a more complete evaluation (see [this paper](https://www.medrxiv.org/content/10.1101/2023.01.23.23284722v1) for more information). This can be done using the function `transform_forecasts()`. In the following example, we truncate negative values at 0 and use the function `log_shift()` to add 1 to all values before applying the natural logarithm.
```{r}
example_quantile %>%
  .[, observed := ifelse(observed < 0, 0, observed)] %>%
  transform_forecasts(append = TRUE, fun = log_shift, offset = 1) %>%
  score() %>%
  summarise_scores(by = c("model", "target_type", "scale")) %>%
  head()
```
## Citation
If you use `scoringutils` in your work, please consider citing it using the output of `citation("scoringutils")`:
```{r, echo = FALSE}
citation("scoringutils")
```
## How to make a bug report or feature request
Please briefly describe your problem and what output you expect in an [issue](https://github.com/epiforecasts/scoringutils/issues). If you have a question, please don't open an issue. Instead, ask on our [Q and A page](https://github.com/epiforecasts/scoringutils/discussions/categories/q-a).
## Contributing
We welcome contributions and new contributors! We particularly appreciate help on priority problems in the [issues](https://github.com/epiforecasts/scoringutils/issues). Please check and add to the issues, and/or open a [pull request](https://github.com/epiforecasts/scoringutils/pulls).
## Code of Conduct
Please note that the `scoringutils` project is released with a [Contributor Code of Conduct](https://epiforecasts.io/scoringutils/CODE_OF_CONDUCT.html). By contributing to this project, you agree to abide by its terms.