# Spatially Continuous Data IV
*NOTE*: The source files for this book are available with companion package [{isdas}](https://paezha.github.io/isdas/). The source files are in Rmarkdown format and packed as templates. These files allow you to execute code within the notebook, so that you can work interactively with the notes.
## Learning objectives
In the previous practice you were introduced to the concept of variographic analysis for fields/spatially continuous data. In this practice, we will learn:
1. How to use residual spatial pattern to estimate prediction errors.
2. Kriging: a method for optimal predictions.
## Suggested reading
- Bailey TC and Gatrell AC [-@Bailey1995] Interactive Spatial Data Analysis, Chapters 5 and 6. Longman: Essex.
- Bivand RS, Pebesma E, and Gomez-Rubio V [-@Bivand2008] Applied Spatial Data Analysis with R, Chapter 8. Springer: New York.
- Brunsdon C and Comber L [-@Brunsdon2015R] An Introduction to R for Spatial Analysis and Mapping, Chapter 6, Sections 6.7 and 6.8. Sage: Los Angeles.
- Isaaks EH and Srivastava RM [-@Isaaks1989applied] An Introduction to Applied Geostatistics, Chapter 12. Oxford University Press: Oxford.
- O'Sullivan D and Unwin D [-@Osullivan2010] Geographic Information Analysis, 2nd Edition, Chapters 9 and 10. John Wiley & Sons: New Jersey.
## Preliminaries
As usual, remember to begin work with a new session of `R`. At the very least, remember to clear the workspace to make sure that you do not have extraneous items there when you begin your work. The command in `R` to clear the workspace is `rm` (for "remove"), followed by a list of items to be removed. To clear the workspace of _all_ objects, do the following:
```{r}
rm(list = ls())
```
Note that `ls()` lists all objects currently in the workspace.
Load the libraries you will use in this activity:
```{r message=FALSE, warning=FALSE}
library(isdas)
library(gstat)
library(plotly)
library(sf)
library(spdep)
library(tidyverse)
library(stars)
```
Begin by loading the data file:
```{r}
data("Walker_Lake")
```
You can verify the contents of the dataframe:
```{r}
summary(Walker_Lake)
```
## Using residual spatial pattern to estimate prediction errors
Previously, in Chapter \@ref(spatially-continuous-data-ii) we discussed how to interpolate a field using trend surface analysis; we also saw how that method may lead to residuals that are not spatially independent.
The implication of non-random residuals is that there is a systematic residual pattern that the model did not capture. This, in turn, means that there is at least _some_ information that can still be extracted from the residuals. Again, we will use the case of Walker Lake to explore one way to do this.
As before, we first calculate the polynomial terms of the coordinates to fit a trend surface to the data:
```{r}
Walker_Lake <- mutate(Walker_Lake,
X3 = X^3, X2Y = X^2 * Y, X2 = X^2,
XY = X * Y,
Y2 = Y^2, XY2 = X * Y^2, Y3 = Y^3)
```
Given the polynomial expansion, we can proceed to estimate the following cubic trend surface model, which we already know provided the best fit to the data:
```{r}
WL.trend3 <- lm(formula = V ~ X3 + X2Y + X2 + X + XY + Y + Y2 + XY2 + Y3,
data = Walker_Lake)
summary(WL.trend3)
```
We can next visualize the residuals which, as you can see, do not appear to be random:
```{r}
plot_ly(x = ~Walker_Lake$X,
y = ~Walker_Lake$Y,
z = ~WL.trend3$residuals,
color = ~WL.trend3$residuals < 0,
colors = c("blue", "red"),
type = "scatter3d")
```
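To substantiate the visual impression, we can test the residuals for spatial autocorrelation. The following is a minimal sketch, assuming that a spatial weights matrix based on the $k = 5$ nearest neighbors is a reasonable choice for this sample:
```{r}
# Build a spatial weights object from the 5 nearest neighbors of each observation
WL.nb <- knn2nb(knearneigh(as.matrix(Walker_Lake[, c("X", "Y")]), k = 5))
WL.listw <- nb2listw(WL.nb)
# Moran's I test of the trend surface residuals; a small p-value suggests
# that the residuals remain spatially autocorrelated
moran.test(WL.trend3$residuals, WL.listw)
```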
Now we will create an interpolation grid:
```{r}
# The function `seq()` creates a sequence of values from `from` to `to`,
# using `by` as the step increment. In this case, we generate a grid
# with points that are 2.5 m apart.
X.p <- seq(from = 0.1, to = 255.1, by = 2.5)
Y.p <- seq(from = 0.1, to = 295.1, by = 2.5)
df.p <- expand.grid(X = X.p, Y = Y.p)
```
We can add the polynomial terms to this grid. Since our trend surface model was estimated using the cubic polynomial, we add those terms to the dataframe:
```{r}
df.p <- mutate(df.p, X3 = X^3, X2Y = X^2 * Y, X2 = X^2,
XY = X * Y,
Y2 = Y^2, XY2 = X * Y^2, Y3 = Y^3)
```
The interpolated cubic surface is obtained by using the model and the interpolation grid as `newdata`:
```{r}
# The function `predict()` is used to make predictions given a model
# and a possibly new dataset, different from the one used for estimation
# of the model.
WL.preds3 <- predict(WL.trend3,
newdata = df.p,
se.fit = TRUE,
interval = "prediction",
level = 0.95)
```
The surface is converted into a matrix for 3D plotting:
```{r}
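# `expand.grid()` varies X fastest, so with `byrow = TRUE` each row of the
# matrix (one value of Y.p) receives the predictions for all values of X.p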
z.p3 <- matrix(data = WL.preds3$fit[,1],
nrow = length(Y.p),
ncol = length(X.p),
byrow = TRUE)
```
And plot:
```{r}
WL.plot3 <- plot_ly(x = ~X.p,
y = ~Y.p,
z = ~z.p3,
type = "surface",
colors = "YlOrRd") |>
layout(scene = list(aspectmode = "manual",
aspectratio = list(x = 1,
y = 1,
z = 1)))
WL.plot3
```
The trend surface provides a smooth estimate of the field. However, it is not sufficient to capture all systematic variation, and fails to produce random residuals.
A possible way of enhancing this approach to interpolation is to _exploit_ the information that remains in the residuals, for instance by the use of $k$-point means.
We can illustrate this as follows. To interpolate the _residuals_, we first need the set of _target_ points (the points for the interpolation), as well as the _source_ (the observations):
```{r}
# We will use the prediction grid we used above to interpolate the residuals
target_xy = expand.grid(x = X.p,
y = Y.p) |>
st_as_sf(coords = c("x", "y"))
# Convert the `Walker_Lake` dataframe to a simple features object:
Walker_Lake.sf <- Walker_Lake |>
st_as_sf(coords = c("X", "Y"))
# Append the residuals to the table
Walker_Lake.sf$residuals <- WL.trend3$residuals
```
It is possible now to use the `kpointmean` function to interpolate the residuals, for instance using $k=5$ neighbors:
```{r}
kpoint.5 <- kpointmean(source_xy = Walker_Lake.sf,
target_xy = target_xy,
z = residuals,
k = 5)
```
Given the interpolated residuals, we can join them to the cubic trend surface, as follows:
```{r}
z.p3 <- matrix(data = WL.preds3$fit[,1] + kpoint.5$z,
nrow = length(Y.p),
ncol = length(X.p),
byrow = TRUE)
```
This is now the interpolated field that combines the trend surface and the estimated residuals:
```{r}
WL.plot3 <- plot_ly(x = ~X.p,
y = ~Y.p,
z = ~z.p3,
type = "surface",
colors = "YlOrRd") |>
layout(scene = list(aspectmode = "manual",
aspectratio = list(x = 1,
y = 1,
z = 1)))
WL.plot3
```
Of all the approaches that we have seen so far, this is the first that provides a genuine estimate of the following:
$$
\hat{z}_p + \hat{\epsilon}_p
$$
With trend surface analysis providing a smooth estimator of the underlying field:
$$
\hat{z}_p = f(x_p, y_p)
$$
And $k$-point means providing an estimator of:
$$
\hat{\epsilon}_p
$$
A question is how to decide the number of neighbors to use in the calculation of the $k$-point means. As previously discussed, with $k = 1$ the interpolator becomes identical to Voronoi polygons, and with $k = n$ it becomes the global mean.
A second question concerns the way the average is calculated. As variographic analysis demonstrates, it is possible to estimate the way in which spatial dependence weakens with distance. Why, then, should more distant points be weighted equally? There is no reason why they should, and in fact variographic analysis elegantly solves this question, as well as the question of how many points to use: all of them, with varying weights.
Next, we will introduce kriging, a method for optimal prediction that is based on the use of variographic analysis.
## Kriging: a method for optimal prediction
To introduce the method known as kriging, we will begin by positing the following model for our predictions:
$$
\hat{z}_p + \hat{\epsilon}_p = \hat{f}(x_p, y_p) + \hat{\epsilon}_p
$$
where $\hat{f}(x_p, y_p)$ is a smooth estimator of an underlying field.
We aim to predict $\hat{\epsilon}_p$ based on the observed residuals. We use an expression similar to the one used for IDW and $k$-point means in Chapter \@ref(spatially-continuous-data-i) (we will use $\lambda$ for the weights, to avoid confusion with the weights used in variographic analysis):
$$
\hat{\epsilon}_p = \sum_{i=1}^n {\lambda_{pi}\epsilon_i}
$$
That is, $\hat{\epsilon}_p$ is a linear combination of the prediction residuals from the trend:
$$
\epsilon_i = z_i - \hat{f}(x_i, y_i)
$$
It is possible to define the following _expected mean squared error_, or _prediction variance_, where $\epsilon_p$ is the (unobserved) residual at prediction location $p$:
$$
\sigma_{\epsilon}^2 = E[(\hat{\epsilon}_p - \epsilon_p)^2]
$$
The prediction variance measures how close, on average, the predicted residuals are to the true residuals.
Expanding the square and using the linearity of expectations, the prediction variance can be decomposed as follows:
$$
\sigma_{\epsilon}^2 = E[\hat{\epsilon}_p^2] + E[\epsilon_p^2] - 2E[\hat{\epsilon}_p\epsilon_p]
$$
It turns out that the expression for the prediction variance depends on the weights (we will not show the detailed derivation, but it can be consulted [here](https://msu.edu/~ashton/classes/866/papers/gatrell_ordkrige.pdf)):
$$
\sigma_{\epsilon}^2 = \sum_{i=1}^n \sum_{j=1}^n{\lambda_{ip}\lambda_{jp}C_{ij}} + \sigma^2 - 2\sum_{i=1}^{n}{\lambda_{ip}C_{ip}}
$$
where $C_{ij}$ is the autocovariance between observations at $i$ and $j$, and $C_{ip}$ is the autocovariance between the observation at $i$ and prediction location $p$.
Fortunately for us, the relationship between the semivariogram and the autocovariance is straightforward:
$$
C_{z}(h) =\sigma^2 - \hat{\gamma}_{z}(h)
$$
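As a small illustration of this relationship, `gstat` can evaluate both the semivariance and the implied autocovariance of a fitted model by means of `variogramLine()`. The parameter values below are hypothetical, chosen only for this sketch:
```{r}
# A hypothetical exponential model: partial sill 1, range 30, nugget 0.1
vgm_exp <- vgm(psill = 1, model = "Exp", range = 30, nugget = 0.1)
# Semivariance gamma(h) and autocovariance C(h) for distances up to 100
gamma_h <- variogramLine(vgm_exp, maxdist = 100)
C_h <- variogramLine(vgm_exp, maxdist = 100, covariance = TRUE)
# Verify that C(h) = sill - gamma(h), with sill = 1 + 0.1
head(cbind(h = gamma_h$dist, gamma = gamma_h$gamma, C = C_h$gamma))
```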
This means that, given the distances $h$ between $i$ and $j$, and between $i$ and $p$, we can use a semivariogram to obtain the autocovariances needed to calculate the prediction variance. We are still missing, however, the weights $\lambda$, which are not known a priori.
These weights can be obtained if we use the following rules:
> The expectation of the prediction errors is zero (unbiasedness).
> Find the weights $\lambda$ that minimize the prediction variance (optimal estimator).
This makes sense, since we would like our predictions to be unbiased (i.e., accurate) and as precise as possible, that is, to have the smallest variance (recall the discussion about accuracy and precision in Chapter \@ref(spatially-continuous-data-iii)).
Again, solving the minimization problem is beyond the scope of our presentation, but it suffices to say that the result is as follows:
$$
\mathbf{\lambda}_p = \mathbf{C}^{-1}\mathit{c}_{p}
$$
where $\mathbf{C}$ is the covariance matrix, and $\mathit{c}_{p}$ is the covariance vector for location $p$.
In summary, kriging is a method to optimally estimate the value of a variable at $p$ as a weighted sum of the observations of the same variable at locations $i$. This method is known to be the **B**est (in the sense that it minimizes the prediction variance) **L**inear (because predictions are a linear combination of the observations) **U**nbiased (since the expectation of the prediction errors is zero) **P**redictor, or **BLUP**.
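Before turning to the implementation in `gstat`, we can illustrate the solution with a toy example. This is a minimal sketch of the weights $\mathbf{\lambda}_p = \mathbf{C}^{-1}\mathit{c}_{p}$, assuming a hypothetical exponential covariance function and three observation locations:
```{r}
# Hypothetical covariance function with sill 1 and range 30
C_fun <- function(h, sill = 1, range = 30) sill * exp(-h / range)
xy <- rbind(c(0, 0), c(20, 0), c(0, 30))   # observation locations
p <- c(5, 5)                               # prediction location
C <- C_fun(as.matrix(dist(xy)))            # covariance matrix among observations
c_p <- C_fun(sqrt(colSums((t(xy) - p)^2))) # covariances between observations and p
lambda <- solve(C, c_p)                    # the kriging weights
lambda
```
Notice that the weights reflect both the proximity of each observation to $p$ and the redundancy among the observations themselves.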
Kriging is implemented in the package `gstat` as follows.
To put kriging to work we must first conduct a variographic analysis of the residuals. The function `variogram()` takes as an argument a simple features object, which we can create as follows:
```{r}
Walker_Lake.sf <- Walker_Lake |>
st_as_sf(coords = c("X", "Y"),
# Setting `remove = FALSE` retains the X and Y coordinates as columns
# in the dataframe after they are used to create the geometry
remove = FALSE)
```
The variogram of the residuals can be obtained by specifying a trend surface in the formula:
```{r}
variogram_v <- variogram(V ~ X3 + X2Y + X2 + X + XY + Y + Y2 + XY2 + Y3,
data = Walker_Lake.sf)
# Plot
ggplot(data = variogram_v,
aes(x = dist,
y = gamma)) +
geom_point() +
geom_text(aes(label = np),
# Nudge the labels away from the points
nudge_y = -1500) +
xlab("Distance") +
ylab("Semivariance")
```
You can verify that the semivariogram above corresponds to the residuals by repeating the analysis directly on the residuals. First, join the residuals to the simple features object:
```{r}
Walker_Lake.sf$e <- WL.trend3$residuals
```
And then calculate the semivariogram and plot:
```{r}
variogram_e <- variogram(e ~ 1,
data = Walker_Lake.sf)
# Plot
ggplot(data = variogram_e,
aes(x = dist,
y = gamma)) +
geom_point() +
geom_text(aes(label = np),
nudge_y = -1500) +
xlab("Distance") +
ylab("Semivariance")
```
The empirical semivariogram is used to estimate a semivariogram function; `fit.variogram()` selects the best fitting model among the candidates passed to `vgm()` (here exponential, spherical, and Gaussian):
```{r}
variogram_v.t <- fit.variogram(variogram_v, model = vgm(c("Exp", "Sph", "Gau")))
variogram_v.t
```
The fitted variogram function can be plotted over the empirical semivariogram as follows:
```{r}
gamma.t <- variogramLine(variogram_v.t, maxdist = 130)
# Plot
ggplot(data = variogram_v,
aes(x = dist,
y = gamma)) +
geom_point(size = 3) +
geom_line(data = gamma.t,
aes(x = dist,
y = gamma)) +
xlab("Distance") +
ylab("Semivariance")
```
We will convert the prediction grid to a simple features object:
```{r}
df.sf <- df.p |>
st_as_sf(coords = c("X", "Y"),
remove = FALSE)
```
Then, we can krige the field as follows (ensure that packages `sf` and `stars` are installed):
```{r}
V.kriged <- krige(V ~ X3 + X2Y + X2 + X + XY + Y + Y2 + XY2 + Y3,
Walker_Lake.sf,
df.sf,
variogram_v.t)
```
Extract the predictions and prediction variance from the object `V.kriged`:
```{r}
V.km <- matrix(data = V.kriged$var1.pred,
               nrow = length(Y.p),
               ncol = length(X.p),
               byrow = TRUE)
V.sm <- matrix(data = V.kriged$var1.var,
               nrow = length(Y.p),
               ncol = length(X.p),
               byrow = TRUE)
```
We can now plot the interpolated field:
```{r}
V.km.plot <- plot_ly(x = ~X.p,
y = ~Y.p,
z = ~V.km,
type = "surface",
colors = "YlOrRd") |>
layout(scene = list(aspectmode = "manual",
aspectratio = list(x = 1,
y = 1,
z = 1)))
V.km.plot
```
Also, we can plot the kriging standard errors (the square root of the prediction variance). This gives an estimate of the uncertainty in the predictions:
```{r}
V.sm.plot <- plot_ly(x = ~X.p,
y = ~Y.p,
z = ~sqrt(V.sm),
type = "surface",
colors = "YlOrRd") |>
layout(scene = list(aspectmode = "manual",
aspectratio = list(x = 1,
y = 1,
z = 1)))
V.sm.plot
```
Where are predictions more/less precise?
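One way to begin exploring this question is to locate the grid points with the smallest and largest kriging standard errors; the following is a sketch that reuses the prediction grid `df.p` from above:
```{r}
df.p |>
  mutate(se = sqrt(V.kriged$var1.var)) |>
  slice(which.min(se), which.max(se)) |>
  select(X, Y, se)
```
In general, the kriging standard errors are smallest near the observation locations and grow in areas where the data are sparse.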