---
title: "SDM Workflow for terrestrial species"
format: docx
author: "Rosie Stanbrook-Buyer"
---
#### Learning objectives
1. Install packages for species distribution modeling
2. Run species distribution models using a generalized linear model
3. Visualize model predictions on a map
Species distribution modeling is becoming an increasingly important tool for understanding how organisms might respond to current and future environmental changes. There is an ever-growing number of approaches to and literature on species distribution models (SDMs), and you are encouraged to check out the Additional Resources section at the bottom of this document for a few of these resources. The materials in the terra package are especially useful, and Jeremy Yoder’s introduction is another great place to start. In this tutorial, we’ll use publicly available data to build, evaluate, and visualize a distribution model for the dung beetle *Dichotomius carolinus*.
#### Workspace organization
First we need to set up our development environment. Although we have used Posit throughout this class, for this assignment you might want to consider switching to RStudio so as not to throttle the available RAM on your Posit account. You might also find that downloading and processing data is slower on Posit.
Anyway, the choice is yours. I’ve written the following instructions with RStudio in mind, but the code can obviously be used in Posit, too.
Open RStudio and create a new project via:
- File > New Project…
- Select ‘New Directory’
- For the Project Type select ‘New Project’
- For Directory name, call it something like “r-sdm” (without the quotes)
- For the parent directory, select somewhere you will remember (like “My Documents” or “Desktop”)
We need to create two folders: ‘data’ will store the data we will be analyzing, and ‘output’ will store the results of our analyses. In the RStudio console:
```{r}
dir.create(path = "data")
dir.create(path = "output")
```
It is good practice to keep input (i.e. the data) and output separate. Furthermore, any work that ends up in the output folder should be completely disposable. That is, the combination of data and the code we write should allow us (or anyone else, for that matter) to reproduce any output.
#### Example data
The data we are working with are observations of the dung beetle, *Dichotomius carolinus*. We are using a cleaned set of records available from GBIF, the Global Biodiversity Information Facility, that I downloaded last week. You can download the data from Canvas and save it in the ‘data’ folder that you created in the step above.
#### Install additional R packages
Next, there are three additional R packages that will need to be installed:
- terra
- geodata
- predicts
To install these, run:
```{r}
install.packages("terra", repos = "https://cran.rstudio.com")
install.packages("geodata", repos = "https://cran.rstudio.com")
install.packages("predicts", repos = "https://cran.rstudio.com")
```
#### Components of the model
The basic idea behind species distribution models is to take two sources of information to model the conditions in which a species is expected to occur. The two sources of information are:
1. Occurrence data: these are usually latitude and longitude geographic coordinates where the species of interest has been observed. These are known as ‘presence’ data. Some models also make use of ‘absence’ data, which are geographic coordinates of locations where the species is known to not occur. Absence data are a bit harder to come by, but are required by some modeling approaches. For this lesson, we will use the occurrence data for *D. carolinus* that you downloaded earlier (see the short sketch after this list).
2. Environmental data: these are descriptors of the environment, and can include abiotic measurements of temperature and precipitation as well as biotic factors, such as the presence or absence of other species (like predators, competitors, or food sources). In this lesson we will focus on the 19 abiotic variables available from WorldClim. Rather than downloading the data from WorldClim, we’ll use functions from the geodata package to download these data (see below).
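To make that idea concrete, here is a minimal sketch, with made-up coordinates, of the shape presence data usually take; the real file we load below has the same two coordinate columns:

```{r}
# Hypothetical example (not real records): presence data are just rows of
# longitude/latitude pairs where the species was observed
toy_presence <- data.frame(decimalLongitude = c(-81.2, -80.5),
                           decimalLatitude = c(29.1, 28.7))
toy_presence
```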
#### Data and quality control
We’ll start our script by loading the three libraries we need.
```{r}
library(terra)
library(geodata)
library(predicts)
```
You might have seen some red messages print out to the screen. This is normal, and as long as none of the messages include “ERROR”, you can just hum right through those messages. If loading the libraries does result in an ERROR message, check to see that the libraries were installed properly.
Now that we have those packages loaded, we can download the bioclimatic variable data with the worldclim_global() function from the geodata package. Please be patient as you're downloading \~ 630 MB of data here.
```{r}
bioclim_data <- worldclim_global(var = "bio",
                                 res = 2.5,
                                 path = "data/")
```
We are giving the worldclim_global() function three critical pieces of information:
1. var = "bio": This tells worldclim_global() that we want to download all 19 of the bioclimatic variables, rather than individual temperature or precipitation measurements.
2. res = 2.5: This is the resolution of the data we want to download; in this case, it is 2.5 minutes of a degree. For other resolutions, you can check the documentation by typing ?worldclim_global into the console.
3. path = "data/": Finally, this sets the location to which the files are downloaded. In our case, it is the data folder we created at the beginning. Note also that after the files are downloaded to the data folder, they are read into memory and stored in the variable called bioclim_data.
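If you want to confirm what landed in memory, terra offers a few quick inspection functions; note that the exact layer names depend on your geodata version, so treat the name shown in the comment below as illustrative:

```{r}
# Reality check on the downloaded data (a terra SpatRaster)
nlyr(bioclim_data)   # should be 19 layers, one per bioclimatic variable
res(bioclim_data)    # cell size in degrees (2.5 arc-minutes is ~0.042 degrees)
names(bioclim_data)  # layer names, e.g. "wc2.1_2.5m_bio_1" in recent versions
```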
Now the climate data are in memory, and next we need to load in the observations for our dung beetle *D. carolinus*:
```{r}
# Read in dung beetle observations
obs_data <- read.csv(file = "data/DC.csv")
```
```{r}
# Check the data to make sure it loaded correctly
summary(obs_data)
```
Although we don't have any NAs in the Lon and Lat variables in this dataset, you might have some in your own species dataset. Those records will not be of any use, so you can remove them from the data frame:
```{r}
# Drop any records with missing coordinates before proceeding
obs_data <- obs_data[!is.na(obs_data$decimalLongitude), ]
obs_data <- obs_data[!is.na(obs_data$decimalLatitude), ]
# Make sure any NAs went away
summary(obs_data)
```
To make species distribution modeling more streamlined, it is useful to have an idea of how widely our species is geographically distributed. We are going to find general latitudinal and longitudinal boundaries and store this information for later use. We use ceiling() and floor() to round up and down, respectively, to the nearest integer:
```{r}
# Determine geographic extent of our data
max_lat <- ceiling(max(obs_data$decimalLatitude))
min_lat <- floor(min(obs_data$decimalLatitude))
max_lon <- ceiling(max(obs_data$decimalLongitude))
min_lon <- floor(min(obs_data$decimalLongitude))
# Store boundaries in a single extent object
geographic_extent <- terra::ext(x = c(min_lon, max_lon, min_lat, max_lat))
```
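It never hurts to print the extent to make sure the corners look sensible; terra lists the values in the order xmin, xmax, ymin, ymax:

```{r}
# Reality check: print the extent (order is xmin, xmax, ymin, ymax)
geographic_extent
```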
Before we do any modeling, it is also a good idea to run a reality check on your occurrence data by plotting the points on a map.
```{r}
# Download data with geodata's world function to use for our base map
world_map <- world(resolution = 3,
                   path = "data/")
# Crop the map to our area of interest
my_map <- crop(x = world_map, y = geographic_extent)
# Plot the base map
plot(my_map,
     axes = TRUE,
     col = "grey90")
# Add the points for individual observations
points(x = obs_data$decimalLongitude,
       y = obs_data$decimalLatitude,
       col = "red",
       pch = 20,
       cex = 0.75)
```
#### Preparing data for modeling
Now that our occurrence data look OK, we can use the bioclimatic variables to create a model. The first thing we want to do, though, is limit our consideration to a reasonable geographic area. That is, for our purposes we are not looking to model *D. carolinus* habitat suitability globally, but rather in the general southeastern region of North America and part of South America. We start by building an extent object that is just a little larger (25% larger) than the extent of our dung beetle observations. We then crop the bioclimatic data to that larger sampling extent. As a reality check, we finish by plotting the cropped version of the first bioclimatic variable.
```{r}
# Make an extent that is 25% larger
sample_extent <- geographic_extent * 1.25
# Crop bioclim data to desired extent
bioclim_data <- crop(x = bioclim_data, y = sample_extent)
# Plot the first of the bioclim variables to check on cropping
plot(bioclim_data[[1]])
```
Note my use of double square brackets \[\[1\]\] to pull out the first layer from the bioclimatic variables, which is annual mean temperature in °C.
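If you prefer, layers can also be pulled out by name instead of position; the exact layer names depend on your geodata version, so it is worth checking them first. A small sketch:

```{r}
# See the layer names, then pull the first layer out by name
names(bioclim_data)
bio1 <- bioclim_data[[names(bioclim_data)[1]]]
bio1
```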
In order to evaluate species distribution models, and really understand the factors influencing where this species of dung beetle occurs, we need to include some absence points: sites where *D. carolinus* is known to not occur. The problem is, we only have presence data for *D. carolinus*.
#### Pseudo-absence points
One common workaround for coercing presence-only data into a presence/absence framework is to use pseudo-absence, or “background” points. While “pseudo-absence” sounds fancy, it really just means that one randomly samples points from a given geographic area and treats them like locations where the species of interest is absent. A great resource investigating the influence and best practices of pseudo-absence points is the study by Barbet-Massin et al. (2012) (see Additional Resources below for full details).
For our purposes, we are going to create a set of 500 background (aka pseudo-absence) points at random and add these to our data. We are going to use the bioclim data to determine the spatial resolution of the points, and restrict the sampling area to the general region of the observations of *D. carolinus*. I chose 500 pseudo-absence points because I had 500 observations in my dataset; a good rule of thumb is to use the same number of pseudo-absence points as presence points.
```{r}
# Set the seed for the random-number generator so results are reproducible
set.seed(20210707)
# Randomly sample points (same number as our observed points)
background <- spatSample(x = bioclim_data,
                         size = 500,     # generate 500 pseudo-absence points
                         values = FALSE, # don't need values
                         na.rm = TRUE,   # don't sample cells with no data (e.g. ocean)
                         xy = TRUE)      # just need coordinates
# Look at first few rows of background
head(background)
```
We can also plot those points, to see how the random sampling looks.
```{r}
# Plot the base map
plot(my_map,
     axes = TRUE,
     col = "grey90")
# Add the background points
points(background,
       col = "grey30",
       pch = 1,
       cex = 0.75)
# Add the points for individual observations
points(x = obs_data$decimalLongitude,
       y = obs_data$decimalLatitude,
       col = "red",
       pch = 20,
       cex = 0.75)
```
We now have observation data and pseudo-absence data, and we need to first put them into one data structure, then add the climate data, so that we end up with a single data frame containing presence points, pseudo-absence points, and the climate data for each of those points. It sounds like a lot, but after putting the two coordinate datasets (observations and pseudo-absence points) together, the terra package makes it easy to extract the climate data.
When we put the observations and pseudo-absence points in the data frame, we need to make sure we know which is which - that is, we need a column to indicate whether a pair of latitude/longitude coordinates indicates a presence point or a (pseudo) absence point. So we start by preparing the two datasets:
```{r}
# Pull out coordinate columns, x (longitude) first, then y (latitude) from dung beetle data
presence <- obs_data[, c("decimalLongitude", "decimalLatitude")]
# Add column indicating presence
presence$pa <- 1
# Convert background data to a data frame
absence <- as.data.frame(background)
# Add column indicating absence
absence$pa <- 0
# Update column names in absence so they match columns in presence
names(absence) <- c("decimalLongitude", "decimalLatitude", "pa")
# Join data into single data frame by adding the data together by rows
all_points <- rbind(presence, absence)
# Reality check on data
head(all_points)
```
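As one more quick check (the exact counts will depend on your dataset; mine had roughly 500 observations), tallying the pa column confirms that both presences and pseudo-absences made it into the combined data frame:

```{r}
# Count presence (1) and pseudo-absence (0) rows
table(all_points$pa)
```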
#### Adding climate data
We are now ready to add climate data to the coordinate data sets. As mentioned above, the terra package helps with this. We will use the extract() function, which takes geographic coordinates and raster data as input, and pulls out values in the raster data for each of the geographic coordinates.
```{r}
bioclim_extract <- extract(x = bioclim_data,
                           y = all_points[, c("decimalLongitude", "decimalLatitude")],
                           ID = FALSE) # No need for an ID column
```
The process of extracting data results in a data frame with the climate data, but that data frame doesn’t have the coordinate information and, more importantly, doesn’t indicate which rows are presence points and which rows are pseudo-absence points. So we need to join these extracted data back with our all_points data frame. After we do this, we do not need the latitude and longitude coordinates anymore, so we can drop those two columns (at least for building the model).
```{r}
points_climate <- cbind(all_points, bioclim_extract)
# Identify columns that are latitude & longitude
drop_cols <- which(colnames(points_climate) %in% c("decimalLongitude", "decimalLatitude"))
drop_cols # print the values as a reality check
# Remove the geographic coordinates from the data frame
points_climate <- points_climate[, -drop_cols]
```
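Depending on your occurrence data, a few points can fall in raster cells with no climate values (for example, right on a coastline), and extract() returns NA for those. Here is a minimal, optional sketch of one way to drop such rows before modeling, assuming you want complete cases only:

```{r}
# Optional: remove any rows where extraction returned NA climate values
# (a no-op if every point fell in a cell with data)
points_climate <- points_climate[complete.cases(points_climate), ]
```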
#### Training and testing data
Now that we have climate data for our presence and pseudo-absence points, we need to take one more step. We are going to build our model using only part of our data, and use the “set aside” data to evaluate model performance afterward. This is known as separating our data into a training set (the data used to build the model) and a testing set (the data used to evaluate the model).
We are going to reserve 20% of the data for testing, so we use the folds() function from the predicts package to evenly assign each point to a random group. To make sure we have a roughly representative sample of both presence and pseudo-absence points, we use the pa column to tell R that our data have these two sub-groups.
```{r}
# Create vector indicating fold
fold <- folds(x = points_climate,
              k = 5,
              by = points_climate$pa)
```
We now can use the fold vector to split data into a training set and a testing set. Values in the fold vector are the integers 1, 2, 3, 4, and 5, each evenly sampled; we can see this with the table() function, which counts how many times each of the fold values occurs:
```{r}
table(fold)
```
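Because we told folds() to balance on the pa column, each fold should also contain a similar mix of presence and pseudo-absence points; a quick cross-tabulation shows this:

```{r}
# Rows are folds, columns are pa values (0 = pseudo-absence, 1 = presence)
table(fold, points_climate$pa)
```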
We will say that any observations in fold 1 will be testing data, and any observations in the other folds (2, 3, 4, 5) will be training data. Note that we did not add a column for fold identity in the points_climate data frame, but we can still use the information in the fold vector to separate training data from testing data.
```{r}
testing <- points_climate[fold == 1, ] # just fold 1
training <- points_climate[fold != 1, ] # everything but fold 1
```
#### Model building
Now that the data are ready, it is (finally!) time to build our species distribution model!
For this model, we will use a generalized linear model (GLM). It is not the best approach and it is not the worst, but it will work for us given that time is limited due to the semester we have had. If you want more information comparing different approaches, see the references in the Additional Resources section below, especially the work of Valavi et al. 2021.
```{r}
# Build a model using training data
glm_model <- glm(pa ~ ., data = training, family = binomial())
```
OK, there’s some odd syntax in that glm() code that warrants some explanation, especially that pa \~ .. Here is the breakdown of the three pieces of information passed to glm():
- pa \~ . : This is the formula we are analyzing; we are asking R to predict the value in the pa column based on the values in all the remaining columns. That is, instead of listing the names of all the bioclimatic variables (pa \~ bio1 + bio2 + bio3...), we can use the dot (.) to mean “all the columns except the column to the left of the tilde (\~).”
- data = training: This tells R to use only the data stored in the training data frame to build the model.
- family = binomial(): Because the response variable, pa, only takes values of 0 or 1, we need to indicate this to R.

Now that we have built our model, we can use it to predict habitat suitability across the entire map. We do this with the predict() function, passing the data to feed into the model (bioclim_data), the stored model itself (glm_model), and finally what values we want as output (type = "response"). This last argument (type = "response") will return the predicted probabilities from our model. After calculating the predicted values, we can plot them with the plot() command.
```{r}
# Get predicted values from the model
glm_predict <- predict(bioclim_data, glm_model, type = "response")
# Plot the predicted values
plot(glm_predict)
```
OK, it is a map, but what does it mean? This plot shows the probability of occurrence of *D. carolinus* across the map. Note the values are all below 1.0.
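If you want to confirm that, terra’s minmax() reports the smallest and largest cell values in the prediction raster:

```{r}
# Check the range of predicted probabilities (should lie between 0 and 1)
minmax(glm_predict)
```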
We now take that model and evaluate it using the observation data and the pseudo-absence points we reserved for model testing. We then use this test to establish a cutoff of occurrence probability for determining the boundaries of the *D. carolinus* range.
```{r}
# Use testing data for model evaluation
glm_eval <- pa_evaluate(p = testing[testing$pa == 1, ],
                        a = testing[testing$pa == 0, ],
                        model = glm_model,
                        type = "response")
```
Here is another spot that warrants some additional explanation. We pass three pieces of information to the pa_evaluate() function:
- p = testing\[testing\$pa == 1, \]: In this case, p stands for presence data, so we pass all the rows in the testing data that correspond to a location where *D. carolinus* was present (that is, the value in the pa column is equal to 1).
- a = testing\[testing\$pa == 0, \]: Similarly, a stands for absence data, so we pass all the pseudo-absence rows in our dataset (i.e. all rows where the value in the pa column is equal to 0).
- model = glm_model: This is the model object we are evaluating. One way to think about this is that the glm_model is a calculator that takes bioclimatic data as input and provides probabilities as output.
With the pa_evaluate() function, we pass data for which we “know” the right answer to these probability calculations. That is, glm_model should predict values close to 1 for the rows we pass to the **p** argument (because we know *D. carolinus* occurs at those locations), and it should predict values close to 0 for the rows we pass to the **a** argument. We use this information on model performance to determine the probability value to use as a cutoff for saying whether a particular location is suitable or unsuitable for *D. carolinus*.
```{r}
# Determine minimum threshold for "presence"
glm_threshold <- glm_eval@thresholds$max_spec_sens
```
The thresholds element of glm_eval offers a number of means of determining the threshold cutoff. Here we chose max_spec_sens, which sets “the threshold at which the sum of the sensitivity (true positive rate) and specificity (true negative rate) is highest.”
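If you are curious about the alternatives, you can print the whole table of candidate cutoffs that pa_evaluate() computed:

```{r}
# Look at all the threshold options, not just max_spec_sens
glm_eval@thresholds
```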
For more information, check out the documentation for the pa_evaluate() function (?pa_evaluate, remember?). And finally, we can use that threshold to paint a map with sites predicted to be suitable for *D. carolinus*!
```{r}
# Plot base map
plot(my_map,
     axes = TRUE,
     col = "grey95")
# Only plot areas where probability of occurrence is greater than the threshold
plot(glm_predict > glm_threshold,
     add = TRUE,
     legend = FALSE,
     col = "darkslategray4")
# And add those observations
points(x = obs_data$decimalLongitude,
       y = obs_data$decimalLatitude,
       col = "black",
       pch = 19,
       cex = 0.75)
# Redraw those country borders
plot(my_map, add = TRUE, border = "grey5")
```
What?!
Hmmm…that doesn’t look right. It plotted a large portion of the map green. Let’s look at what we actually asked R to plot, that is, we plot the value of predict_presence \> bc_threshold. So what is that?
```{r}
glm_predict > glm_threshold
```
The comparison of these two rasters produces another raster with values of only FALSE or TRUE: FALSE when the value in a grid cell of glm_predict is less than or equal to glm_threshold, and TRUE for cells with a value greater than glm_threshold. Since there are two values in this comparison (FALSE and TRUE), we need to update what we pass to the col parameter in our second call to plot(). Instead of a single value, we provide a color for FALSE (NA, which leaves those cells unpainted) and a color for TRUE ("darkslategray4"):
```{r}
# Plot base map
plot(my_map,
     axes = TRUE,
     col = "grey95")
# Only plot areas where probability of occurrence is greater than the threshold
plot(glm_predict > glm_threshold,
     add = TRUE,
     legend = FALSE,
     col = c(NA, "darkslategray4")) # <-- Update the values HERE
# And add those observations
points(x = obs_data$decimalLongitude,
       y = obs_data$decimalLatitude,
       col = "black",
       pch = 19,
       cex = 0.75)
# Redraw those country borders
plot(my_map, add = TRUE, border = "grey5")
```
A final note on our approach: the map we have drawn presents a categorical classification of whether a particular point on the landscape will be suitable or not for the species of interest.
This classification relies quite heavily on the value of the threshold (see glm_threshold and the documentation for pa_evaluate()) and on the pseudo-absence points. Given that we used random sampling to generate those pseudo-absence points, there is potential for variation in the predicted range if you run this code more than once (try it! If you re-run the code from the point of creating the pseudo-absence points, you are almost guaranteed a different map). There are a number of approaches to dealing with this variation, and the paper by Barbet-Massin et al. (2012) is a great resource.
I’ll leave it as homework for you to determine which approach is most appropriate here!
Next Monday (11/18/2024) we will finish off our models with some forecasting, making predictions about our species’ distributions under different climate scenarios. Please come with your script prepared up to this point, as we will be continuing directly from here.
#### Additional resources
- [The creators of the terra package have an excellent, in-depth guide to species distribution modeling in R](https://rspatial.org/sdm/index.html)
- [A lighter-weight introduction to species distribution models in R](https://www.molecularecologist.com/2013/04/23/species-distribution-models-in-r/)
- [A really nice comparison among different SDM methods](https://esajournals.onlinelibrary.wiley.com/doi/10.1002/ecm.1486)
- [Fast and flexible Bayesian species distribution modelling using Gaussian processes](https://besjournals.onlinelibrary.wiley.com/doi/pdf/10.1111/2041-210X.12523)
- [Run a range of species distribution models](https://rdrr.io/cran/biomod2/man/BIOMOD_Modeling.html)
- [SDM polygons on a Google map](https://rdrr.io/rforge/dismo/man/gmap.html)
- [R package ‘maxnet’ for functionality of Java maxent package](https://cran.r-project.org/web/packages/maxnet/maxnet.pdf)
- [A study on the effect of pseudo-absences in SDMs (Barbet-Massin et al. 2012)](https://besjournals.onlinelibrary.wiley.com/doi/10.1111/j.2041-210X.2011.00172.x)