merging the new branch
NMoghaddam committed Sep 20, 2021
2 parents f80e458 + 469c7c0 commit 127f78b
Showing 28 changed files with 654 additions and 16 deletions.
51 changes: 51 additions & 0 deletions .github/workflows/testing.yaml
@@ -0,0 +1,51 @@
# This is a basic workflow to help you get started with Actions

name: Python testing

# Controls when the action will run.
on:
push:
branches:
- master
- develop
pull_request:
branches:
- master
- develop

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
# This workflow contains a single job called "build"
build:
# The type of runner that the job will run on
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.6, 3.7, 3.8]

# Steps represent a sequence of tasks that will be executed as part of the job
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- name: Checkout
uses: actions/checkout@v2

- name: Setup Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}

- name: Install
#
run: |
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-key C99B11DEB97541F0
sudo apt-add-repository https://cli.github.com/packages
sudo apt update
sudo apt install -y --no-install-recommends libnetcdf-dev
sudo apt install -y --no-install-recommends gdal-bin libgdal-dev
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
pip install GDAL==$(gdal-config --version) --global-option=build_ext --global-option="-I/usr/include/gdal"
- name: Run Testing
run: python -m unittest discover
10 changes: 10 additions & 0 deletions .gitignore
@@ -3,3 +3,13 @@ output/
log/
*.pyc
run_*
#PyCharm files
.idea/
testing/
*.code-workspace
.dockerignore
.vscode/
.devcontainer/
.DS_Store
*.log
input2/
10 changes: 6 additions & 4 deletions conf.py
@@ -38,7 +38,9 @@

# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.coverage', 'sphinx.ext.pngmath', 'sphinx.ext.viewcode']
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.doctest',
'sphinx.ext.coverage', 'sphinx.ext.imgmath',
'sphinx.ext.viewcode']

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
@@ -54,16 +56,16 @@

# General information about the project.
project = u'WindMultipliers'
copyright = u'2014, Geoscience Australia'
copyright = u'2021, Geoscience Australia'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '2.0'
version = '3.0'
# The full version, including alpha/beta/rc tags.
release = '2.0'
release = '3.0'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
55 changes: 50 additions & 5 deletions docs/run_instructions.rst
@@ -24,36 +24,65 @@ the code's home directory. To install these packages with pip, use:

Input datasets
==============
The wind multipliers code requires two input datasets:
The wind multipliers code requires four input datasets:
* **Landcover classification:**
The landcover classification dataset is used to calculate the change in wind speed over varying landcover surfaces.
The input landcover classification dataset must be a classified dataset, broken into desired landcover categories, such as urban, forest,
grassland etc. The classification categories should be integer values (but this is not required). The interpretation of each landcover type is
outlined in the accompanying terrain_table.
The `National Dynamic Land Cover Dataset of Australia Version 2.0 <http://www.ga.gov.au/metadata-gateway/metadata/record/gcat_83868>`_ can be
used if a higher resolution dataset is not available.

To update the local wind multiplier dataset in 2021, the Land Cover Classification Scheme (LCCS) dataset, last updated for 2015 by Digital Earth Australia
and distributed as a single-band GeoTIFF with 25 m spatial resolution, is used. In the preprocessing stage this dataset is overlaid with a mesh block-derived dataset (2016)
to improve the categories with "Natural surface" types.
* **Digital elevation model:**
The DEM dataset is used to calculate topography and shielding parameters.
The `1 second Shuttle Radar Topography Mission (SRTM) Smoothed Digital Elevation Models (DEM-S) Version 1.0 <http://www.ga.gov.au/metadata-gateway/metadata/record/gcat_72759>`_ is
available to use as an input.
* **Mesh blocks:**
The administrative boundaries, based on PSMA Australia version 2016, are used for processing urban areas.
* **Settlement types:**
The settlement types dataset, based on the 2016 census, is sourced from the Australian Bureau of Statistics (ABS) and involves counts of dwellings by place of enumeration, as well as UCL by STRD dwelling structure.
This dataset is used for processing urban areas, along with the mesh block input dataset, in the preprocessing stage.

Both input datasets can be placed in the `input` folder within Wind_multipliers, however can be placed anywhere that can be accessed by the code.
All input datasets can be placed in the `input` folder within Wind_multipliers; however, they can be located anywhere the code can access.
The path to these datasets is set in the configuration file.
At present, both datasets need to be in `.img` format, however this will be changed in future code releases.
Previously, both the landcover and DEM datasets needed to be in `*.img` format; this restriction was removed in the recent code release.

.. note:: The lowest resolution of the input Landcover and DEM datasets will determine the resolution of the calculated wind multipliers.

Configuration file
==================
Before running all_multipliers.py to produce terrain, shielding and topographic multipliers, the configuration file `multiplier_conf.cfg`, in the
code home directory, needs to be configured. The following options need to be set in the configuration file:
code home directory, needs to be configured for the preprocessing step with the following options:

* **settlement_data** the location of the settlement types dataset to be used
* **settlement_cat** the attribute in the settlement types dataset used for the merge
* **land_use_data** the location of the mesh block dataset to be used
* **land_use_cat** the attribute in the mesh block dataset used for the merge
* **crop_mask** the location of the layer used to crop the outputs (vector and raster) of the preprocessing step; set this parameter to None for continental coverage
* **input_topo** set to True to map onto the topographic file given by inputValues.dem_data; it can also be a filename
* **topo_crop** set to True to crop the preprocessing output to the AOI defined by **crop_mask**; defaults to False
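Reading these options can be sketched with the standard library's ``configparser``. The snippet below is a minimal illustration, not the project's actual parsing code; the section name matches the ``[Preprocessing]`` section of ``multiplier_conf.cfg``, but the paths are placeholders:

```python
import configparser

# Hypothetical excerpt mirroring the [Preprocessing] section of
# multiplier_conf.cfg; paths are placeholders, not real dataset locations.
SAMPLE = """
[Preprocessing]
settlement_data = /data/settlement_types.shp
settlement_cat = SETTLEMENT
land_use_data = /data/meshblocks.shp
land_use_cat = MB_CAT16
crop_mask = None
input_topo = True
topo_crop = False
"""

config = configparser.ConfigParser()
config.read_string(SAMPLE)
pre = config["Preprocessing"]

# crop_mask = None means continental coverage (no cropping)
crop_mask = pre.get("crop_mask")
if crop_mask == "None":
    crop_mask = None

# input_topo may be the literal True (use inputValues.dem_data)
# or a filename pointing at a specific topography raster
input_topo = pre.get("input_topo")
use_dem_data = input_topo == "True"
```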

Running `pre_process.py` in the preprocessing step produces the merged mesh block and settlement types layer as both vector and raster (i.e. shapefile and GeoTIFF) outputs. The second part of the preprocessing step
overlays the merged mesh block raster layer with the LCCS dataset in order to improve the categories with "Natural surface" types. In this step, areas with "Natural surface" types in the LCCS dataset
are identified and their codes are replaced with the corresponding, appropriate codes from the merged mesh block dataset using `LCCS_meshblock_continent.py`. The output of this step
is an updated LCCS layer that is used as the **terrain_data** for producing multipliers.
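The code-replacement rule applied by `LCCS_meshblock_continent.py` can be illustrated with a small numpy sketch. The category codes and array shapes below are invented for illustration; the real script operates on the full continental rasters:

```python
import numpy as np

# Invented codes for illustration: in the toy LCCS raster, 215 marks
# "Natural surface"; the merged mesh block raster carries more specific
# settlement-derived codes (non-zero) for some of those same cells.
NATURAL_SURFACE = 215

lccs = np.array([[110, 215, 215],
                 [215, 124, 110]])

meshblock = np.array([[0, 31, 32],
                      [33, 0, 0]])

# Where the LCCS cell is "Natural surface" and the mesh block layer has
# a code, take the mesh block code; otherwise keep the LCCS code.
updated = np.where((lccs == NATURAL_SURFACE) & (meshblock > 0),
                   meshblock, lccs)
```

The result keeps every non-"Natural surface" LCCS cell untouched while the 215 cells pick up the mesh block codes 31, 32 and 33.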

* **root:** the working directory of the task.
* **upwind_length:** the upwind buffer distance
* **terrain_data:** the location of the terrain dataset to be used
* **terrain_table:** the location of the csv table outlining the format of the terrain dataset to be read in
* **dem_data:** the location of the DEM dataset to be used

Assuming the merged shapefile from the preprocess script is available, there is also an optional step between preprocessing and generating local wind multipliers, using `rasterize.py`.
This step rasterises the merged mesh block shapefile to GeoTIFF on a given topography file.

There are two required arguments, the `-i` and `-t` flags for the shapefile and topography inputs. Two further arguments are optional: the `-a` and `-c` flags for the attribute to rasterise and the crop mask, respectively. At the moment, rasterisation is based on the CAT value set in the preprocess script.
If `-i` and `-t` are given, it creates the rasterised file (test.tiff) on the extent and resolution of the topography file (same projection).
Adding the `-c` option generates the same test.tiff plus two other files, test_crop.tiff and [your topo filename]_crop.tiff, which are the cropped versions of the files. It works with arbitrary shapes (not necessarily rectangular ones).
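The core idea of burning a polygon attribute onto the topography file's grid can be sketched in pure numpy. This is a toy illustration only: the geotransform values, the rectangle, and the CAT value 42 are invented, and the real `rasterize.py` relies on GDAL to handle arbitrary polygon shapes rather than the axis-aligned box used here:

```python
import numpy as np

# Toy "topography" grid: origin, pixel size and shape stand in for the
# geotransform that would be read from the -t file (values invented).
x0, y0 = 150.0, -27.0   # top-left corner (lon, lat)
dx, dy = 0.01, -0.01    # pixel size; dy is negative for north-up rasters
ncols, nrows = 6, 4

# Pixel-centre coordinates on the topography grid
xs = x0 + dx * (np.arange(ncols) + 0.5)
ys = y0 + dy * (np.arange(nrows) + 0.5)
X, Y = np.meshgrid(xs, ys)

# One axis-aligned "mesh block" with its CAT attribute. Real mesh blocks
# are arbitrary polygons handled by GDAL's rasterisation, not this test.
xmin, xmax = 150.01, 150.05
ymin, ymax = -27.04, -27.0
cat_value = 42

# Burn the attribute wherever a pixel centre falls inside the polygon,
# producing a raster aligned with the topography file's extent/resolution.
raster = np.zeros((nrows, ncols), dtype=np.int32)
inside = (X >= xmin) & (X <= xmax) & (Y >= ymin) & (Y <= ymax)
raster[inside] = cat_value
```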

terrain_table
-------------
The terrain table is a csv file that provides the 'key' for reading in the terrain dataset. The use of the terrain
@@ -94,6 +123,22 @@ CATEGORY,DESCRIPTION,ROUGHNESS_LENGTH_m,SHIELDING

Running the code
================
The script for preprocessing the mesh blocks and settlement types datasets is ``pre_process.py``. This script merges the settlement and land use data using a common merging attribute.

To run ``pre_process.py`` type

``python pre_process.py -c multiplier_conf.cfg``

from the code home directory.

The script for rasterizing merged mesh block dataset is ``rasterize.py``.

To run ``rasterize.py`` type

``python rasterize.py -c multiplier_conf.cfg -i <path to merged shapefile> -t <path to topography file>``

from the code directory.

The script for deriving terrain, shielding and topographic multipliers is ``all_multipliers.py``. This script links four modules: terrain, shielding,
topographic and utilities.

@@ -109,4 +154,4 @@ This software implements parallelisation using mpi4py for MPI handling. To run i

where ncpu is the number of CPUs adopted.

The results are located in the output folder (created automatically during processing) under the root directory.
16 changes: 16 additions & 0 deletions multiplier_conf.cfg
@@ -1,3 +1,19 @@
[Preprocessing]
settlement_data = /workspaces/Wind_multipliers/testing/input/SettlementTypes_20210413.shp
settlement_cat = SETTLEMENT
land_use_data = /workspaces/Wind_multipliers/testing/input/NEXIS_INPUT_MB2016_QLD.shp
land_use_cat = MB_CAT16
#crop mask can be None
crop_mask = /workspaces/Wind_multipliers/testing/input/box_sub.shp
# Mapping into topographic file
# if True takes inputValues.dem_data, can be a filename too.
input_topo = True
topo_crop = True


output_shapefile = /workspaces/Wind_multipliers/testing/intermediate/meshblobcks.shp
output_rasterized = /workspaces/Wind_multipliers/testing/intermediate/meshblobcks.tiff

[inputValues]
root = /short/w85/nfm547/Wind_multipliers
upwind_length = 0.01
