Processor showcasing how to extract data from EarthData Store with premium cloud mask and publish a datacube on cloud storage


Reflectance Datacube Processor

Learn how to use <geosys/> platform capabilities in your own business workflow! Build your own processor and learn how to run it on your platform.
Who we are

Report Bug · Request Feature



About The Project

The aim of this project is to help our customers leverage our data platform capabilities to build their own analytics.

The purpose of this example is to demonstrate how to extract pixels of interest from our EarthData Store based on a geometry and data selection criteria such as sensors and bands of interest, how to access standard or premium cloud masks, and how to publish the results as an n-dimensional object (Zarr file) on a cloud storage location. The extracted data can support analysis and analytic creation, as in the notebooks showcasing how to generate a vegetation index for non-cloudy dates by leveraging the spatial dimensions of the dataset, or how to plot vegetation index evolution over time.

It highlights the ability to quickly create pixel pipelines and generate n-dimensional reflectance objects in xarray format.
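As a minimal illustration of the kind of analytic such a datacube enables, the sketch below computes NDVI from red and near-infrared reflectance arrays with NumPy. The array values are hypothetical stand-ins; in practice the bands would come from the Zarr datacube produced by the processor.

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    red = red.astype("float64")
    nir = nir.astype("float64")
    # Avoid division by zero where both bands are 0 (e.g. nodata pixels).
    denom = np.where((nir + red) == 0, np.nan, nir + red)
    return (nir - red) / denom

# Hypothetical 2x2 reflectance values for a single date.
red = np.array([[0.05, 0.10], [0.20, 0.0]])
nir = np.array([[0.45, 0.40], [0.30, 0.0]])
print(ndvi(red, nir))
```

With a full datacube, the same computation applies unchanged across the time dimension thanks to array broadcasting.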

(back to top)

Getting Started

Prerequisite

Use of this project requires valid credentials from the <geosys/> platform. If you need trial access, please register here.

To run this example, you will need the following tools installed:

  1. Install Conda: please install Conda on your computer. You can download and install it by following the instructions provided on the official Conda website

  2. Install Docker Desktop: please install Docker Desktop on your computer. You can download and install it by following the instructions provided on the official Docker Desktop website

  3. Install Jupyter Notebook: please install Jupyter Notebook on your computer by following the instructions provided on the official Jupyter website

  4. Install Git: please install Git on your computer. You can download and install GitHub Desktop by visiting the official GitHub Desktop website and following the provided instructions

This package has been tested on Python 3.10.12.

(back to top)

Installation

To set up the project, follow these steps:

  1. Clone the project repository:

    git clone https://github.com/earthdaily/reflectance-datacube-processor
    
  2. Change the directory:

    cd reflectance-datacube-processor
    
  3. Fill the environment variable (.env)

Ensure that you populate the .env file with your credentials. To access and use our STAC catalog, EarthDataStore, please ensure that you have the following environment variables set in your .env file:

EDS_API_URL = https://api.eds.earthdaily.com/archive/v1/stac/v1
EDS_AUTH_URL = <eds auth url>
EDS_CLIENT_ID =  <your client id>
EDS_SECRET = <your secret>

You can also specify the EDS_CLIENT_ID and EDS_SECRET directly on the API. Those two parameters are not mandatory in the .env file.
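As a rough sketch of how such settings can be resolved (the helper below is illustrative, not the processor's actual code), a credential can be read from the environment with a fallback to a value supplied at request time, mirroring the note above that EDS_CLIENT_ID and EDS_SECRET may be passed directly to the API:

```python
import os

def resolve_credential(name, request_value=None):
    """Prefer a value passed with the API request, then the environment (.env)."""
    value = request_value or os.environ.get(name)
    if not value:
        raise ValueError(f"Missing credential: set {name} in .env or pass it to the API")
    return value

# Stand-in for a value that python-dotenv would load from the .env file.
os.environ["EDS_CLIENT_ID"] = "demo-client-id"
print(resolve_credential("EDS_CLIENT_ID"))                  # from the environment
print(resolve_credential("EDS_SECRET", "secret-from-api"))  # from the request
```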

To publish results on cloud storage, please add your credentials allowing the processor to write outputs:

AWS_ACCESS_KEY_ID = <...>
AWS_SECRET_ACCESS_KEY = <...>
AWS_BUCKET_NAME = <...>

AZURE_ACCOUNT_NAME = <...>
AZURE_BLOB_CONTAINER_NAME = <...>
AZURE_SAS_CREDENTIAL = <...>

You can also specify the AWS_BUCKET_NAME directly on the API.
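The destination of the published Zarr datacube is derived from these variables. As an illustration only (the function name, provider labels other than "AWS_S3", and path layout are assumptions, not the processor's actual code), an output location per cloud provider might be built like this:

```python
import os

def output_location(provider, entity_id):
    """Build a cloud-storage URI for a published Zarr datacube (hypothetical layout)."""
    if provider == "AWS_S3":
        bucket = os.environ["AWS_BUCKET_NAME"]
        return f"s3://{bucket}/{entity_id}.zarr"
    if provider == "AZURE_BLOB_STORAGE":
        account = os.environ["AZURE_ACCOUNT_NAME"]
        container = os.environ["AZURE_BLOB_CONTAINER_NAME"]
        return f"https://{account}.blob.core.windows.net/{container}/{entity_id}.zarr"
    raise ValueError(f"Unsupported cloud storage provider: {provider}")

os.environ["AWS_BUCKET_NAME"] = "my-bucket"  # stand-in value for the demo
print(output_location("AWS_S3", "entity_1"))  # s3://my-bucket/entity_1.zarr
```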

(back to top)

Usage

Run the processor in a Docker container

To set up and run the project using Docker, follow these steps:

  1. Build the Docker image locally:

    docker build --tag reflectancedatacubeprocessor .
    
  2. Run the Docker container:

    docker run -e RUN_MODE_ENV=API -p 8100:80 reflectancedatacubeprocessor
    
  3. Access the API by opening a web browser and navigating to the following URL:

    http://127.0.0.1:8100/docs
    

This URL opens the Swagger UI documentation. Click the "Try it out" button under each POST endpoint, then enter the request parameters and body.

POST /earthdaily-data-processor

Parameters:

  • Cloud storage, ex: "AWS_S3"
  • Collections, ex: "Venus-l2a"
  • Assets, ex: "red"
  • Cloud mask, ex: "native"
  • Create metacube, ex: "no"
  • Clear coverage (%), ex: "80"

Body Example:

{
  "geometry": "POLYGON ((1.26 43.427, 1.263 43.428, 1.263 43.426, 1.26 43.426, 1.26 43.427))",
  "startDate": "2019-05-01",
  "endDate": "2019-05-31",
  "EntityID": "entity_1"
}
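Putting the parameters and body together, the same request can be sent programmatically. The sketch below uses only the Python standard library and assumes the query-parameter names shown in Swagger UI (they may differ; check /docs for the exact names) and the container running on port 8100 as above:

```python
import json
import urllib.parse
import urllib.request

# Query parameters: names are assumptions, verify them against /docs.
params = {
    "cloud_storage": "AWS_S3",
    "collections": "Venus-l2a",
    "assets": "red",
    "cloud_mask": "native",
    "create_metacube": "no",
    "clear_coverage": 80,
}
# Request body, as in the example above.
body = {
    "geometry": "POLYGON ((1.26 43.427, 1.263 43.428, 1.263 43.426, 1.26 43.426, 1.26 43.427))",
    "startDate": "2019-05-01",
    "endDate": "2019-05-31",
    "EntityID": "entity_1",
}

def run():
    # Requires the Docker container from step 2 to be running locally.
    url = "http://127.0.0.1:8100/earthdaily-data-processor?" + urllib.parse.urlencode(params)
    request = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=600) as response:
        return json.loads(response.read())
```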

Leverage datacube to generate analytics within a Jupyter Notebook

To use the project's Jupyter notebooks, please follow these steps:

  1. Open a terminal in the reflectance-datacube-processor folder.

  2. Create the required Conda environment:

    conda env create -f environment.yml
    
  3. Activate the Conda environment:

    conda activate earthdaily-processor
    
  4. Start a Jupyter Notebook server:

    jupyter notebook --port=8080
    
  5. Open the example notebook (datacube-sustainable-practices.ipynb) by clicking on it.

  6. Run the notebook cells to execute the code example and plot results.

NB: To use the example notebooks, you first need to generate the example datacubes. They are described in each notebook (parameters that are not mentioned should keep their default values).

(back to top)

Project Organization

├── README.md         
├── notebooks    
│   ├───datacube-cloud_mask.ipynb 
│   ├───datacube-digital-agriculture.ipynb 
│   ├───datacube-simulated-dataset.ipynb 
│   └───datacube-sustainable-practices.ipynb 
├── requirements.txt    
├── environment.yml   
│── Dockerfile
│── .env
│── LICENSE
│── VERSION
├── setup.py         
├───src                
│   ├───main.py 
│   ├───test.py 
│   ├───api
│   │   ├── files
│   │   │   └── favicon.svg
│   │   ├── __init__.py
│   │   ├── api.py
│   │   └── constants.py
│   ├───data
│   │   └── processor_input_example.json
│   ├───schemas
│   │   ├── __init__.py
│   │   ├── input_schema.py
│   │   └── output_schema.py
│   ├───utils
│   │   ├── __init__.py 
│   │   ├── utils.py
│   │   └── file_utils.py
│   └───earthdaily_data_procesor
│       ├── __init__.py
│       └── processor.py
└── test_environment.py         

(back to top)

Resources

The following links will provide access to more information:

(back to top)

Support development

If this project has been useful, or helped you or your business save precious time, don't hesitate to give it a star.

(back to top)

License

Distributed under the MIT License.

(back to top)

Contact

For any additional information, please email us.

(back to top)

Copyrights

© 2023 Geosys Holdings ULC, an Antarctica Capital portfolio company | All Rights Reserved.

(back to top)
