Commit

Fix typos (microsoft#236)
* Fixed typos

* Restore tutorials/xarray-spatial_classification-methods.ipynb

Co-authored-by: Tom Augspurger <[email protected]>
giswqs and Tom Augspurger authored Dec 12, 2022
1 parent fdfe2de commit 1bd0f35
Showing 38 changed files with 53 additions and 53 deletions.
4 changes: 2 additions & 2 deletions competitions/cloud-cover/benchmark-tutorial.ipynb
Original file line number Diff line number Diff line change
@@ -148,7 +148,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The training data consists of 11,748 \"chips\". Each chip is imagery of a specific area captured at a specific point in time. There are four images associated with each chip in the competition data. Each image within a chip captures light from a different range of wavelengths, or \"band\". For example, the B02 band for each chip shows the strengh of visible blue light, which has a wavelength around 492 nanometers (nm). The bands provided are:\n",
"The training data consists of 11,748 \"chips\". Each chip is imagery of a specific area captured at a specific point in time. There are four images associated with each chip in the competition data. Each image within a chip captures light from a different range of wavelengths, or \"band\". For example, the B02 band for each chip shows the strength of visible blue light, which has a wavelength around 492 nanometers (nm). The bands provided are:\n",
"\n",
"\n",
"<table border=\"1\" class=\"table\" style=\"width:70%; margin-left:auto; margin-right:auto\">\n",
@@ -1773,7 +1773,7 @@
"\n",
"- `__init__`: how to instantiate a `CloudModel` class\n",
"\n",
"- `forward`: forward pass for an image in the neural network propogation\n",
"- `forward`: forward pass for an image in the neural network propagation\n",
"\n",
"- `training_step`: switch the model to train mode, implement the forward pass, and calculate training loss (cross-entropy) for a batch\n",
"\n",
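The `training_step` in the cell above computes a cross-entropy loss. As a rough, framework-free sketch of what that loss is (NumPy only; the notebook itself presumably uses PyTorch's built-in `nn.CrossEntropyLoss`):

```python
import numpy as np

def cross_entropy(logits, targets):
    """Mean cross-entropy loss for integer class targets.

    logits: (batch, n_classes) raw scores; targets: (batch,) class indices.
    """
    # Shift by the row max for numerical stability before exponentiating
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Pick out the log-probability assigned to each target class
    return -log_probs[np.arange(len(targets)), targets].mean()

# Illustrative batch: two samples, two classes
logits = np.array([[2.0, 0.5], [0.1, 3.0]])
targets = np.array([0, 1])
loss = cross_entropy(logits, targets)
```

Confident predictions on the correct class drive the loss toward zero; uniform logits give `log(n_classes)`.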
4 changes: 2 additions & 2 deletions datasets/3dep/3dep-seamless-example.ipynb
@@ -121,7 +121,7 @@
"metadata": {},
"source": [
"The Planetary Computer hosts both the 1 arc-second (nominal 30m pixel size) and 1/3 arc-second (nominal 10m pixel size) resolution 3DEP data.\n",
"Let's seperate the two sets into their own lists by filtering on the Ground Sample Distance (GSD) in the STAC items.\n",
"Let's separate the two sets into their own lists by filtering on the Ground Sample Distance (GSD) in the STAC items.\n",
"\n",
"We'll also [sign the assets](../quickstarts/reading-stac.ipynb) before downloading, which can be done with or without a Planetary Computer subscription key."
]
@@ -140,7 +140,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"DEMs are relatively simple raster datsets — they only have one band (elevation). Let's compare visualizations of high and low resolution images for our area of interest.\n",
"DEMs are relatively simple raster datasets — they only have one band (elevation). Let's compare visualizations of high and low resolution images for our area of interest.\n",
"We'll read the items into a DataArray using [`stackstac`](https://stackstac.readthedocs.io/), taking care to crop the larger assets down to our area of interest."
]
},
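The GSD-based split described in the cell above is a simple filter. A minimal sketch, with plain dictionaries standing in for the STAC items (real items from `pystac-client` would expose this as `item.properties["gsd"]`; the ids below are illustrative):

```python
# Hypothetical stand-ins for 3DEP STAC items; real items carry a `gsd` property
items = [
    {"id": "n40w106-30m", "gsd": 30},
    {"id": "n40w106-10m", "gsd": 10},
    {"id": "n41w106-30m", "gsd": 30},
]

# Separate the two resolutions into their own lists by filtering on GSD
items_30m = [item for item in items if item["gsd"] == 30]
items_10m = [item for item in items if item["gsd"] == 10]
```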
2 changes: 1 addition & 1 deletion datasets/cil-gdpcir/cil-gdpcir-example.ipynb
@@ -32,7 +32,7 @@
"source": [
"### STAC Metadata\n",
"\n",
"The [CIL-GDPR datsets](https://planetarycomputer.microsoft.com/dataset/group/cil-gdpcir) are grouped into several collections, depending on the license the data are provided under.\n",
"The [CIL-GDPCIR datasets](https://planetarycomputer.microsoft.com/dataset/group/cil-gdpcir) are grouped into several collections, depending on the license the data are provided under.\n",
"\n",
"- [CIL-GDPCIR-CC0](https://planetarycomputer.microsoft.com/dataset/cil-gdpcir-cc0)\n",
"- [CIL-GDPCIR-CC-BY](https://planetarycomputer.microsoft.com/dataset/cil-gdpcir-cc-by)\n",
2 changes: 1 addition & 1 deletion datasets/cil-gdpcir/ensemble.ipynb
@@ -43,7 +43,7 @@
"source": [
"### Understanding the GDPCIR collections\n",
"\n",
"The [CIL-GDPCIR datsets](https://planetarycomputer.microsoft.com/dataset/group/cil-gdpcir) are grouped into several collections, depending on the license the data are provided under.\n",
"The [CIL-GDPCIR datasets](https://planetarycomputer.microsoft.com/dataset/group/cil-gdpcir) are grouped into several collections, depending on the license the data are provided under.\n",
"\n",
"- [CIL-GDPCIR-CC0](https://planetarycomputer.microsoft.com/dataset/cil-gdpcir-cc0) - provided in public domain using a [CC 1.0 Universal Public Domain Dedication](https://creativecommons.org/publicdomain/zero/1.0/)\n",
"- [CIL-GDPCIR-CC-BY](https://planetarycomputer.microsoft.com/dataset/cil-gdpcir-cc-by) - provided under a [CC Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/)\n",
2 changes: 1 addition & 1 deletion datasets/cil-gdpcir/indicators.ipynb
@@ -1925,7 +1925,7 @@
"id": "27efedee-ac3e-4588-bf8b-cbcc068fae03",
"metadata": {},
"source": [
"Here, the state data requirement has been reduced significantly - but careful - this is the size required by the final product *once computed*. But this is a scheduled [dask](https://docs.xarray.dev/en/latest/user-guide/dask.html) operation, and because of dask's [Lazy Evaluation](https://tutorial.dask.org/01x_lazy.html), we haven't done any work yet. Dask is waiting for us to require operations, e.g. by calling `.compute()`, `.persist()`, or because of blocking opreations like writing to disk or plotting. Until we do one of those, we haven't actually read any data yet!\n",
"Here, the state data requirement has been reduced significantly - but careful - this is the size required by the final product *once computed*. But this is a scheduled [dask](https://docs.xarray.dev/en/latest/user-guide/dask.html) operation, and because of dask's [Lazy Evaluation](https://tutorial.dask.org/01x_lazy.html), we haven't done any work yet. Dask is waiting for us to require operations, e.g. by calling `.compute()`, `.persist()`, or because of blocking operations like writing to disk or plotting. Until we do one of those, we haven't actually read any data yet!\n",
"\n",
"### Loading a subset of the data\n",
"\n",
2 changes: 1 addition & 1 deletion datasets/daymet/daymet-example.ipynb
@@ -3591,7 +3591,7 @@
"source": [
"### Analyze and plot North America\n",
"\n",
"North America is considerably larger than the Hawaii or Puerto Rico dataset, so let's downsample a bit for quicker plotting. We'll also start up a Dask cluster to do reads and processing in parallel. If you're running this on the Hub, use the following URL in the Dask Extension to see progress. If you're not running it on the hub, you can use a `distributed.LocalCluster` to acheive the same result (but it will take longer, since it's running on a single machine)."
"North America is considerably larger than the Hawaii or Puerto Rico dataset, so let's downsample a bit for quicker plotting. We'll also start up a Dask cluster to do reads and processing in parallel. If you're running this on the Hub, use the following URL in the Dask Extension to see progress. If you're not running it on the Hub, you can use a `distributed.LocalCluster` to achieve the same result (but it will take longer, since it's running on a single machine)."
]
},
{
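Downsampling before plotting, as the cell above does for North America, can be as simple as striding the array. A sketch with NumPy (the notebook presumably works on an xarray DataArray, where `[::k, ::k]` slicing or `.coarsen()` plays the same role; the grid here is synthetic):

```python
import numpy as np

# Synthetic 100x100 raster standing in for a continental-scale grid
grid = np.arange(100 * 100, dtype=float).reshape(100, 100)

# Keep every 10th pixel in each dimension for a quick-look plot
factor = 10
downsampled = grid[::factor, ::factor]
```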
4 changes: 2 additions & 2 deletions datasets/deltares-floods/deltares-floods-example.ipynb
@@ -80,7 +80,7 @@
"source": [
"### Data access\n",
"\n",
"The entire dataset is made up of several dozen individual netCDF files, each representing an entire global inundation map, but derived from either a diferent source DEM, sea level rise condition, or return period. Return periods are occurence probabilities for floods of a particular magnitude, often referred to as, for example, \"a 100 year flood\". Use the STAC API to query on these various properties:\n",
"The entire dataset is made up of several dozen individual netCDF files, each representing an entire global inundation map, but derived from either a different source DEM, sea level rise condition, or return period. Return periods are occurrence probabilities for floods of a particular magnitude, often referred to as, for example, \"a 100 year flood\". Use the STAC API to query on these various properties:\n",
"\n",
"To start, we'll load and plot the inundation data produced from the 90m NASADEM at a 100 year return period for 2050 sea level rise conditions. "
]
@@ -1885,7 +1885,7 @@
}
],
"source": [
"# Concat the two datasets along the time dimention\n",
"# Concat the two datasets along the time dimension\n",
"mds = xr.concat([ds_2018_myanmar, ds_myanmar], dim=\"time\")\n",
"\n",
"# Time coordinates are not set in the data files. Set them correctly\n",
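The link between a return period and an occurrence probability mentioned above is simple arithmetic: a T-year flood has an annual exceedance probability of 1/T, so a "100 year flood" has a 1% chance of being met or exceeded in any given year. A minimal sketch:

```python
def annual_exceedance_probability(return_period_years: float) -> float:
    """Probability that a flood of this magnitude is met or exceeded in any given year."""
    return 1.0 / return_period_years

# A "100 year flood" has a 1% annual chance of occurring
p100 = annual_exceedance_probability(100)  # 0.01
```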
2 changes: 1 addition & 1 deletion datasets/ecmwf-forecast/ecmwf-forecast-example.ipynb
@@ -659,7 +659,7 @@
"id": "baa890b6-0ca6-4ca9-8b27-6954099a249d",
"metadata": {},
"source": [
"If we provided just the `filename` to `xarray.open_datset`, we'd get an error from `cfgrib` saying it can't form a valid DataArray from the file. That's because the GRIB2 file contains multiple data variables that don't form a neat hypercube. Provide `filter_by_keys` to indicate which subset of the data to read in."
"If we provided just the `filename` to `xarray.open_dataset`, we'd get an error from `cfgrib` saying it can't form a valid DataArray from the file. That's because the GRIB2 file contains multiple data variables that don't form a neat hypercube. Provide `filter_by_keys` to indicate which subset of the data to read in."
]
},
{
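Passing `filter_by_keys` to the cfgrib engine looks roughly like this. This is a sketch only: the valid keys and values depend on the messages in the particular GRIB2 file, and `"fc"`/`"10u"` below are illustrative choices, not values taken from the notebook:

```python
# Select only one coherent subset of the GRIB2 messages so cfgrib can
# build a valid Dataset; the key/value pairs here are illustrative.
backend_kwargs = {
    "filter_by_keys": {"dataType": "fc", "shortName": "10u"},
}

# With a real GRIB2 file on disk this would be (requires xarray + cfgrib):
# ds = xr.open_dataset(filename, engine="cfgrib", backend_kwargs=backend_kwargs)
```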
2 changes: 1 addition & 1 deletion datasets/era5/era5-example.ipynb
@@ -128,7 +128,7 @@
"id": "34d4b90c-01bf-4fc6-a060-4bd19eff6bc4",
"metadata": {},
"source": [
"There are several assets avaiable, one for each data variable. We can build up a dataset with all the variables using `xarray.open_dataset` and `combine_by_coords`."
"There are several assets available, one for each data variable. We can build up a dataset with all the variables using `xarray.open_dataset` and `combine_by_coords`."
]
},
{
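Combining one-variable-per-asset files with `combine_by_coords`, as the cell above describes, amounts to merging datasets that share coordinates. A sketch with small in-memory datasets (the notebook itself would open the signed asset URLs instead; the variable names here are illustrative):

```python
import numpy as np
import xarray as xr

time = np.arange(3)

# Two single-variable datasets sharing the same coordinate, standing in
# for two per-variable assets
ds_temp = xr.Dataset(
    {"temperature": ("time", np.array([280.0, 281.0, 282.0]))}, coords={"time": time}
)
ds_precip = xr.Dataset(
    {"precipitation": ("time", np.array([0.0, 1.5, 0.2]))}, coords={"time": time}
)

# Merge the variables into one dataset aligned on the shared coordinate
combined = xr.combine_by_coords([ds_temp, ds_precip])
```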
2 changes: 1 addition & 1 deletion datasets/gridmet/gridmet-example.ipynb
@@ -2019,7 +2019,7 @@
"id": "f50bb30f",
"metadata": {},
"source": [
"The video will only be embeded if you're running the notebook interactively. If you're just reading this example, you can see the output at\n",
"The video will only be embedded if you're running the notebook interactively. If you're just reading this example, you can see the output at\n",
"\n",
"<video src=\"https://ai4edatasetspublicassets.blob.core.windows.net/assets/pc_video/pc-examples-gridmet-air-temperature.webm\" controls>"
]
2 changes: 1 addition & 1 deletion datasets/hrea/hrea-example.ipynb
@@ -184,7 +184,7 @@
"source": [
"### Read a window\n",
"\n",
"Cloud Optimized GeoTIFFs (COGs) allows us to effeciently download and read sections of a file, rather than the entire file, when only part of the region is required. The COGs are stored on disk with an internal set of windows. You can read sections of any shape and size, but reading them in the file-defined window size is most efficient. Let's read the same asset, but this time only request the second window. "
"Cloud Optimized GeoTIFFs (COGs) allow us to efficiently download and read sections of a file, rather than the entire file, when only part of the region is required. The COGs are stored on disk with an internal set of windows. You can read sections of any shape and size, but reading them in the file-defined window size is most efficient. Let's read the same asset, but this time only request the second window. "
]
},
{
2 changes: 1 addition & 1 deletion datasets/io-lulc/io-lulc-9-class-example.ipynb
@@ -75,7 +75,7 @@
"source": [
"### Select a region and find data items\n",
"\n",
"We'll pick an area in Thailand and use the STAC API to find what data items are avaialable."
"We'll pick an area in Thailand and use the STAC API to find what data items are available."
]
},
{
2 changes: 1 addition & 1 deletion datasets/io-lulc/io-lulc-example.ipynb
@@ -77,7 +77,7 @@
"source": [
"### Select a region and find data items\n",
"\n",
"We'll pick an area surrounding Manila, Philippines and use the STAC API to find what data items are avaialable. We won't select a date range since this dataset contains items from a single timeframe in 2020."
"We'll pick an area surrounding Manila, Philippines and use the STAC API to find what data items are available. We won't select a date range since this dataset contains items from a single timeframe in 2020."
]
},
{
2 changes: 1 addition & 1 deletion datasets/jrc-gsw/jrc-gsw-example.ipynb
@@ -58,7 +58,7 @@
"source": [
"### Query the dataset\n",
"\n",
"JRC Global Surface Water data on the Planetary Computer is available globally. We'll pick an area with seasonal water in Bangladesh and use the STAC API to find what data items are avaialable."
"JRC Global Surface Water data on the Planetary Computer is available globally. We'll pick an area with seasonal water in Bangladesh and use the STAC API to find what data items are available."
]
},
{
4 changes: 2 additions & 2 deletions datasets/landsat-c2/landsat-c2-example.ipynb
@@ -143,7 +143,7 @@
"source": [
"### Available assets\n",
"\n",
"In additon to numerous metadata assets, each Electro-Optical (EO) band is a separate asset."
"In addition to numerous metadata assets, each Electro-Optical (EO) band is a separate asset."
]
},
{
@@ -1245,7 +1245,7 @@
"id": "4a48a225-ce61-43e2-b59d-e2f4fea62038",
"metadata": {},
"source": [
"To convert from Kelvin to degress, subtract 273.15."
"To convert from Kelvin to degrees, subtract 273.15."
]
},
{
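The Kelvin-to-degrees conversion mentioned in the cell above is a plain offset. A minimal sketch with NumPy (the values are illustrative; the notebook presumably applies this to the loaded surface-temperature band):

```python
import numpy as np

# Surface temperatures in Kelvin (illustrative values)
kelvin = np.array([273.15, 293.15, 300.0])

# Degrees Celsius = Kelvin - 273.15
celsius = kelvin - 273.15
```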
4 changes: 2 additions & 2 deletions datasets/modis/modis-fire-example.ipynb
@@ -120,7 +120,7 @@
"bbox = [longitude - buffer, latitude - buffer, longitude + buffer, latitude + buffer]\n",
"items = dict()\n",
"\n",
"# Fetch the collection of interest and print availabe items\n",
"# Fetch the collection of interest and print available items\n",
"for datetime in datetimes:\n",
" print(f\"Fetching {datetime}\")\n",
" search = catalog.search(\n",
@@ -208,7 +208,7 @@
"id": "f1887df6-eb0f-4cc0-8a4f-c6baa9a8abd7",
"metadata": {},
"source": [
"For this example, we'll visualize the fire mask throughtout the peak of the 2021 Dixie Wildfire in California. Let's grab each fire mask cover COG and load them into an xarray using [odc-stac](https://github.com/opendatacube/odc-stac). The MODIS coordinate reference system is a [sinusoidal grid](https://modis-land.gsfc.nasa.gov/MODLAND_grid.html), which means that views in a naïve XY raster look skewed. For visualization purposes, we reproject to a [spherical Mercator projection](https://wiki.openstreetmap.org/wiki/EPSG:3857) for intuitive, north-up visualization.\n",
"For this example, we'll visualize the fire mask throughout the peak of the 2021 Dixie Wildfire in California. Let's grab each fire mask cover COG and load them into an xarray using [odc-stac](https://github.com/opendatacube/odc-stac). The MODIS coordinate reference system is a [sinusoidal grid](https://modis-land.gsfc.nasa.gov/MODLAND_grid.html), which means that views in a naïve XY raster look skewed. For visualization purposes, we reproject to a [spherical Mercator projection](https://wiki.openstreetmap.org/wiki/EPSG:3857) for intuitive, north-up visualization.\n",
"\n",
"The fire mask values are defined as:\n",
"\n",
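The `bbox` construction in the search cell above builds a `[west, south, east, north]` box around a point. A self-contained sketch (the coordinates are illustrative, roughly in the Dixie Fire area; the notebook defines its own point and buffer):

```python
# Build a [west, south, east, north] bounding box around a point,
# mirroring the buffer pattern used in the STAC search cell.
longitude, latitude = -121.4, 39.9  # illustrative point
buffer = 2
bbox = [longitude - buffer, latitude - buffer, longitude + buffer, latitude + buffer]
```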
2 changes: 1 addition & 1 deletion datasets/modis/modis-imagery-example.ipynb
@@ -108,7 +108,7 @@
"buffer = 2\n",
"items = dict()\n",
"\n",
"# Fetch the collection of interest and print availabe items\n",
"# Fetch the collection of interest and print available items\n",
"\n",
"for datetime in datetimes:\n",
" print(f\"Fetching {datetime}\")\n",
2 changes: 1 addition & 1 deletion datasets/modis/modis-temperature-example.ipynb
@@ -115,7 +115,7 @@
"}\n",
"items = dict()\n",
"\n",
"# Fetch the collection of interest and print availabe items\n",
"# Fetch the collection of interest and print available items\n",
"for name, number in months.items():\n",
" datetime = f\"{year}-{number}\"\n",
" search = catalog.search(\n",
4 changes: 2 additions & 2 deletions datasets/ms-buildings/ms-buildings-example.ipynb
@@ -211,7 +211,7 @@
"source": [
"### Working with large files\n",
"\n",
"The full dataset is partitioned by region. To avoid very large files for regions with many buildings, regions available as [Parquet Datasets](https://arrow.apache.org/docs/python/parquet.html#partitioned-datasets-multiple-files) consiting of multiple Parquet files. You can use libraries like [Dask](https://dask.org/) or [Apache Spark](https://spark.apache.org/) to load in the partioned data. This example uses Dask to process the data in parallel."
"The full dataset is partitioned by region. To avoid very large files for regions with many buildings, regions are available as [Parquet Datasets](https://arrow.apache.org/docs/python/parquet.html#partitioned-datasets-multiple-files) consisting of multiple Parquet files. You can use libraries like [Dask](https://dask.org/) or [Apache Spark](https://spark.apache.org/) to load in the partitioned data. This example uses Dask to process the data in parallel."
]
},
{
@@ -270,7 +270,7 @@
"id": "e4cf5732-5a54-4f61-9294-729b76542139",
"metadata": {},
"source": [
"The asset's `href` points to the root of a Parquet datset in Azure Blob Storage. We'll use dask-geopandas to load the dataset. This gives a `dask_geopandas.GeoDataFrame` with 13 partitions."
"The asset's `href` points to the root of a Parquet dataset in Azure Blob Storage. We'll use dask-geopandas to load the dataset. This gives a `dask_geopandas.GeoDataFrame` with 13 partitions."
]
},
{
@@ -658,7 +658,7 @@
"id": "53de3988-3b98-43c5-ac5c-7d90a2ad7701",
"metadata": {},
"source": [
"Or you can use `xarray.open_mfdataset` to load all the variables for an item, which will combine each of the varaibles."
"Or you can use `xarray.open_mfdataset` to load all the variables for an item, which will combine each of the variables."
]
},
{
2 changes: 1 addition & 1 deletion datasets/nasadem/nasadem-example.ipynb
@@ -145,7 +145,7 @@
"source": [
"### Read and plot a NASADEM tile\n",
"\n",
"We found an asset that matched our seach, so we'll open the GeoTIFF directly with xarray and downlsample the data for easier plotting. The `datashader` render can handle rendering the whole array, but the resulting image size is quite large. "
"We found an asset that matched our search, so we'll open the GeoTIFF directly with xarray and downsample the data for easier plotting. The `datashader` renderer can handle rendering the whole array, but the resulting image size is quite large. "
]
},
{
6 changes: 3 additions & 3 deletions datasets/noaa-nclimgrid/noaa-nclimgrid-example.ipynb
@@ -102,7 +102,7 @@
"\n",
"items = dict()\n",
"\n",
"# Fetch the collection of interest and print availabe items\n",
"# Fetch the collection of interest and print available items\n",
"for datetime in datetimes:\n",
" print(f\"Fetching {datetime}\")\n",
" search = catalog.search(\n",
@@ -798,11 +798,11 @@
"metadata": {},
"outputs": [],
"source": [
"# Libaries for drawing maps\n",
"# Libraries for drawing maps\n",
"import cartopy.crs as ccrs\n",
"import cartopy\n",
"\n",
"# Libaries for making plots\n",
"# Libraries for making plots\n",
"import matplotlib.pyplot as plt\n",
"\n",
"\n",
4 changes: 2 additions & 2 deletions datasets/sentinel-1-grd/sentinel-1-grd-example.ipynb
@@ -240,7 +240,7 @@
"id": "fcee1c90-0a39-4cc4-ac51-00b80bd2f3b4",
"metadata": {},
"source": [
"The item's data assets will be some combination of `vh`, `vv`, `hv`, and `hh`, depending on the polarization the signal was transmitted and received in. In this case, the item has `vv` and `vh` assets. In general, check the `sar:polarizations` field for what is avaiable."
"The item's data assets will be some combination of `vh`, `vv`, `hv`, and `hh`, depending on the polarization the signal was transmitted and received in. In this case, the item has `vv` and `vh` assets. In general, check the `sar:polarizations` field for what is available."
]
},
{
@@ -449,7 +449,7 @@
"source": [
"### GRD Products\n",
"\n",
"The resolution and bands available depend on the [aquisition mode](https://sentinel.esa.int/web/sentinel/user-guides/sentinel-1-sar/acquisition-modes) and level of multi-looking.\n",
"The resolution and bands available depend on the [acquisition mode](https://sentinel.esa.int/web/sentinel/user-guides/sentinel-1-sar/acquisition-modes) and level of multi-looking.\n",
"\n",
"* Stripmap (SM)\n",
"* Interferometric Wide Swath (IW)\n",
2 changes: 1 addition & 1 deletion quickstarts/leafmap-example.ipynb
@@ -206,7 +206,7 @@
"state": {
"_model_module_version": "^0.16.0",
"_view_module_version": "^0.16.0",
"attribution": "(C) OpenStreetMap contributors, vizualization CC-By-SA 2.0 Freemap.sk",
"attribution": "(C) OpenStreetMap contributors, visualization CC-By-SA 2.0 Freemap.sk",
"max_native_zoom": 18,
"max_zoom": 16,
"min_native_zoom": 0,