CosmiQ_SN4_Baseline


This repository contains code to train baseline models that identify buildings in imagery from the SpaceNet 4: Off-Nadir Building Footprint Detection Challenge, and to use those models to generate predictions for the test imagery in the competition submission format. See the DownLinQ post about the baseline for more information.

Table of Contents:

  • Requirements
  • Repository contents
  • Installation
  • Usage

Requirements

  • Python 3.6
  • The SpaceNet 4 training and test datasets. See download instructions under "Downloads" here. These datasets are freely available from an AWS S3 bucket, and are most easily downloaded using the AWS CLI. The dataset comprises about 75 GB of gzipped tarballs of imagery data with accompanying .geojson-formatted labels. You must download and expand the tarballs prior to running any of the scripts in the bin directory or performing any model training with the cosmiq_sn4_baseline python module.
  • Several python packages (rasterio, tensorflow, keras, opencv, spacenetutilities, numpy, pandas, scikit-image). All are installed automatically during Installation using the Dockerfile or pip.

Optional dependencies:

  • NVIDIA GPUs (see notes under Dockerfile and pip in Installation)
  • Tensorflow-GPU
  • Tensorboard (for live monitoring of model training)
  • nvidia-docker 2

Repository contents

  • cosmiq_sn4_baseline directory, setup.py, and MANIFEST.in: Required components of the pip-installable cosmiq_sn4_baseline module (a usage sketch follows this list). Components of that module:
    • models: Keras model architecture for model training.
    • utils: Utility functions for converting SpaceNet GeoTIFF and GeoJSON data to the formats required by this library.
    • DataGenerator: keras.utils.Sequence subclasses for streaming augmentation and data feeding into models.
    • losses: Custom loss functions for model training in Keras.
    • metrics: Custom metrics for use during model training in Keras.
    • callbacks: Custom callbacks for use during model training in Keras.
    • inference: Inference code for predicting building footprints using a trained Keras model.
  • Dockerfile: nvidia-docker Dockerfile to create images and containers with all requirements to use this package.
  • bin directory: Scripts for data pre-processing, model creation, model training, and running inference using cosmiq_sn4_baseline.
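
The sketch below shows how these components could fit together in a custom training script, using the NumPy arrays written by make_np_arrays.py (see Usage below). The get_model name and its arguments are illustrative assumptions, not the module's confirmed API; consult the docstrings in cosmiq_sn4_baseline.models and cosmiq_sn4_baseline.DataGenerator for the real names and signatures.

import numpy as np
import cosmiq_sn4_baseline as csb

# Arrays written by bin/make_np_arrays.py (see Usage below).
train_ims = np.load('output_dir/all_train_ims.npy')
train_masks = np.load('output_dir/all_train_masks.npy')

# Hypothetical constructor -- the real model builder lives in
# csb.models; check its docstring for the actual name and arguments.
model = csb.models.get_model('ternausnetv1', input_shape=train_ims.shape[1:])
model.fit(train_ims, train_masks, batch_size=8, epochs=10)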

Installation

Docker container setup (NVIDIA GPU usage only)

Building the nvidia-docker image for this repository will install all of the package's dependencies, as well as the cosmiq_sn4_baseline python module, and will provide you with a working environment to run the data pre-processing, model training, and inference scripts (see Usage). To build the container:

  1. Clone the repository: git clone https://github.com/CosmiQ/CosmiQ_SN4_Baseline.git
  2. cd CosmiQ_SN4_Baseline
  3. nvidia-docker build -t cosmiq_sn4_baseline .
  4. NV_GPU=[GPU_ID] nvidia-docker run -it --name space_base cosmiq_sn4_baseline
    Replace [GPU_ID] with the identifier for the GPU you plan to use.
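
Once inside the running container, nvidia-smi (a standard NVIDIA driver utility, not something this repository provides) is a quick way to confirm that the selected GPU is visible:

nvidia-smi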

Docker container install troubleshooting

Updated 12.31.2018: We have observed problems with Docker installation of this repo leading to GDAL linking errors with conda-forge rasterio 1.0.13 and GDAL 2.3.2. If you encounter problems with import rasterio commands in the codebase, we recommend updating the GDAL installation line in the Dockerfile to pin gdal==2.2.4, which serves as a workaround until the conda-forge feedstocks are fixed.

If you do not have access to GPUs, you can still install the python codebase using pip and perform model training and inference; however, it will run much more slowly.

pip installation of the codebase

If you do not have access to GPUs, or you only wish to install the python library within an existing environment, the python package is installable via pip using two approaches:

  1. Clone the repository and install locally
  • Navigate to your desired destination directory in a shell, and run: git clone https://github.com/cosmiq/cosmiq_sn4_baseline.git
  • cd CosmiQ_SN4_Baseline
  • pip install -e .
  2. Install directly from GitHub (python module only)
  • Within a shell, run pip install -e git+git://github.com/cosmiq/cosmiq_sn4_baseline#egg=cosmiq_sn4_baseline
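
With either approach, a quick import check confirms the install:

python -c "import cosmiq_sn4_baseline"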

Usage

The python library and scripts here can be used in one of two ways:

  1. Use the scripts within the bin directory for data pre-processing, model training, and inference.
  2. Write your own data processing, training, and inference code using the classes and functions in the cosmiq_sn4_baseline module.

The second case is self-explanatory; usage of the various functions and classes is documented in the codebase.
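
For example, a model trained with this library could be reloaded and applied roughly as follows. The csb.inference.infer call is a hypothetical placeholder for whatever cosmiq_sn4_baseline.inference actually exposes, and custom_objects must be filled in with the custom losses and metrics from csb.losses and csb.metrics that were used at training time; see the docstrings for the real names.

import numpy as np
from keras.models import load_model
import cosmiq_sn4_baseline as csb

# Custom losses/metrics used during training must be re-registered here.
model = load_model('model.h5', custom_objects={})

# Test imagery array written by bin/make_np_arrays.py.
test_ims = np.load('output_dir/all_test_ims.npy')

# Hypothetical entry point -- see the docstrings in csb.inference.
preds = csb.inference.infer(model, test_ims)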

All of the scripts in bin can be called from the command line with the format
python [script.py] [arguments]

The arguments are documented within the codebase. You can also receive a description of their usage by running
python [script.py] -h
from the command line. Their usage is also detailed below.

Command line functions

make_rgbs.py --dataset_dir (source_directory) --output_dir (destination_directory) [--verbose --overwrite]
added in 1.1.0 Converts pan-sharpened, 4-channel, 16-bit images to 8-bit RGB images (BGR channel order).

NOTE: IMAGERY TARBALLS MUST BE EXPANDED BEFORE CALLING THIS SCRIPT!
Arguments:

  • --dataset_dir, -d: Path to the directory containing both the training and test datasets. The structure should be thus:
dataset_dir
|
+-- SpaceNet-Off-Nadir_Train
|   +-- Unzipped imagery directories from tarballs (e.g. Atlanta_nadir29_catid_1030010003315300/)
|   +-- geojson/  # directory containing building labels
|
+-- SpaceNet-Off-Nadir_Test
    +-- Unzipped imagery directories from tarballs
  • --output_dir, -o: Path to the desired directory to save output data to. Outputs will comprise 8-bit BGR .tifs and binary mask .tifs for each location chip. The output structure will be thus when completed:
output_dir
|
+-- train_rgb: directory containing 8-bit BGR tiffs for each collect/chip pair from SpaceNet-Off-Nadir_Train subdirs
+-- masks: directory containing tiff binary masks for each chip from SpaceNet-Off-Nadir_Train/geojson
+-- test_rgb: directory containing 8-bit BGR tiffs for each collect/chip pair from SpaceNet-Off-Nadir_Test subdirs
  • --verbose, -v: Verbose text output while running? Defaults to False.
  • --overwrite, -w: Overwrite pre-existing images? Defaults to False, in which case it skips over any images that are already present.
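
An example invocation, run from the repository root (paths are illustrative):

python bin/make_rgbs.py --dataset_dir /data/spacenet4 --output_dir /data/spacenet4_processed --verbose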

make_np_arrays.py --dataset_dir (source_directory) --output_dir (destination_directory) [--verbose --create_splits --overwrite]
Convert imagery to a format usable by Keras models. NOTE: IMAGERY TARBALLS MUST BE EXPANDED BEFORE THIS SCRIPT IS CALLED!
Arguments:

  • --dataset_dir, -d: Path to the directory containing both the training and test datasets. The structure should be thus:
dataset_dir
|
+-- SpaceNet-Off-Nadir_Train
|   +-- Unzipped imagery directories from tarballs (e.g. Atlanta_nadir29_catid_1030010003315300/)
|   +-- geojson/  # directory containing building labels
|
+-- SpaceNet-Off-Nadir_Test
    +-- Unzipped imagery directories from tarballs
  • --output_dir, -o: Path to the desired directory to save output data to. Outputs will comprise 8-bit BGR .tifs, binary mask .tifs for each location chip, and NumPy arrays containing both of the above. The output structure will be thus when completed:
output_dir
|
+-- train_rgb: directory containing 8-bit BGR tiffs for each collect/chip pair from SpaceNet-Off-Nadir_Train subdirs
+-- masks: directory containing tiff binary masks for each chip from SpaceNet-Off-Nadir_Train/geojson
+-- test_rgb: directory containing 8-bit BGR tiffs for each collect/chip pair from SpaceNet-Off-Nadir_Test subdirs
+-- all_train_ims.npy: numpy array of training imagery
+-- all_test_ims.npy: numpy array of test imagery
+-- all_train_masks.npy: numpy array of binary building footprint masks
+-- training_chip_ids.npy: numpy array of chip IDs for each image in the training array
+-- test_chip_ids.npy: numpy array of chip IDs for each image in the test array
|  [Optional, see --create_splits flag below]
+-- nadir_train_ims.npy: numpy array of training imagery from nadir angles 7-25 only
+-- offnadir_train_ims.npy: numpy array of training imagery from nadir angles 26-40 only
+-- faroffnadir_train_ims.npy: numpy array of training imagery from nadir angles 41-53 only
+-- nadir_train_masks.npy
+-- offnadir_train_masks.npy
+-- faroffnadir_train_masks.npy
  • --verbose, -v: Verbose text output while running? Defaults to False.
  • --create_splits, -s: Make nadir, offnadir, and far-offnadir training subarrays? Defaults to False. Note that using this flag roughly doubles the disk space required for imagery numpy array storage.
  • --overwrite, -ow: Overwrite existing images? Defaults to False, in which case it will skip any image files or arrays that are already present. Note: due to a known issue, the -ow flag must currently be passed whenever this script is run.
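
An example invocation, run from the repository root (paths are illustrative; note the -ow flag, which is currently required as described above):

python bin/make_np_arrays.py --dataset_dir /data/spacenet4 --output_dir /data/spacenet4_processed --create_splits --verbose -ow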

train_model.py --data_path (source_directory) --output_path (path to model output) [--subset ['all', 'nadir', 'offnadir', or 'faroffnadir'] --seed (integer) --model ['ternausnetv1' or 'unet', see docstring] --tensorboard_dir (path to desired tensorboard log output dir)]

Train a model on the data. Note: Source imagery must be generated using make_np_arrays.py prior to use.
Arguments:

  • --data_path, -d: Path to the source dataset files. This corresponds to --output_dir from make_np_arrays.py if using --data_format array (the default), or to the output_dir/train_rgb directory from make_rgbs.py if using --data_format files. See the docstring for more details.
  • New --mask_path, -m: Path to the directory containing masks. If not passed, it's assumed that you're using arrays from make_np_arrays.py, in which case the script will use --data_path here as well. This must point to the output_dir/masks directory from make_rgbs.py if --data_format is files.
  • New --data_format, -f: What type of data should be read into the model? Can be either array (the default) or files. If array, then make_np_arrays.py must have been run first to generate the array data. If files, either make_np_arrays.py or make_rgbs.py must have been run.
  • New --recursive, -r: Should subdirectories of --data_path be recursively traversed to search for images? Defaults to False. If used, then every .tif file in any directory under --data_path must be an RGB .tif intended for use in training.
  • --output_path, -o: Path to save the trained model to. Should end in '.h5' or '.hdf5'.
  • --subset, -s: Train on all of the data, or just a subset? Defaults to 'all'. Other options are 'nadir', 'offnadir', or 'faroffnadir'. To use the subset options, the imagery subsets must have been produced using the --create_splits flag in make_np_arrays.py.
  • --seed, -e: Seed for random number generation in NumPy and TensorFlow. Alters initialization parameters for each model layer. Defaults to 42.
  • --model, -m: Options are 'ternausnetv1' (default) and 'unet'. See cosmiq_sn4_baseline.models for details on model architecture.
  • --tensorboard_dir, -t: Destination directory for tensorboard log writing. Optional, only required if you want to use tensorboard to visualize training.
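
An example invocation, run from the repository root (paths are illustrative):

python bin/train_model.py --data_path /data/spacenet4_processed --output_path /models/baseline.h5 --model ternausnetv1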

make_predictions.py --model_path (path) --test_data_path (path to test_data/) --output_dir (desired directory for outputs) [--verbose --angle_set ('all', 'nadir', 'offnadir', or 'faroffnadir') --angle (integer angles) --n_chips (integer number of unique chips to predict for each angle) --randomize_chips (randomize the order of chips before subsetting) --footprint_threshold (integer minimum number of pixels for a footprint to be kept) --window_step (integer number of pixels to step in x,y directions during inference) --simplification_threshold (threshold in meters for simplifying polygon vertices)]

make_predictions.py runs inference on the test image set. It scans across the X,Y axes of each image in steps of --window_step pixels (defaults to 64), producing overlapping predictions, and then averages them. This helps reduce edge effects.
Arguments:

  • --model_path, -m: path to the .hdf5 or .h5 file saved by train_model.py.
  • --test_data_path, -t: Path to the test_data directory produced by make_np_arrays.py.
  • --output_dir, -o: path to the desired output directory to save data to. Defaults to test_output in the current working directory. The directory structure will be thus:
output_dir
|
+-- output_geojson: directory containing geojson-formatted footprints for visual inspection (e.g., with QGIS)
|   +-- subdir for each nadir angle
|
+-- predictions.csv: SpaceNet challenge-formatted .csv for passing to competition evaluation
  • --verbose, -v: produce verbose text output? Defaults to False.
  • --angle_set, -as: Should inference be run on imagery from all angles, or only a subset? Options are 'all', 'nadir', 'offnadir', and 'faroffnadir'. Not to be used at the same time as --angle.
  • --angle, -a: Specific angle[s] to produce predictions for. Not to be used at the same time as --angle_set.
  • --n_chips, -n: Number of unique chips to run evaluation on. Defaults to all.
  • --randomize_chips, -r: Randomize order of chips prior to subsetting? Defaults to False. Only has an effect if using --n_chips.
  • --footprint_threshold, -ft: Minimum footprint size to be kept as a building prediction. Defaults to 0.
  • --window_step, -ws: Step size for sliding window during inference. Defaults to 64.
  • --simplification_threshold, -s: Threshold, in meters, for simplifying polygon vertices. Defaults to 0 (no simplification). Failure to simplify can result in very large .csv outputs (>500 MB), which are too big to use in submission for the competition.
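
An example invocation, run from the repository root (paths and argument values are illustrative):

python bin/make_predictions.py --model_path /models/baseline.h5 --test_data_path /data/spacenet4_processed --output_dir test_output --simplification_threshold 1 --verbose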

Good luck in SpaceNet 4: Off-Nadir Buildings!
