Learn how to use <geosys/> platform capabilities in your own business workflow! Build your own processor and learn how to run it on the platform.
The aim of this project is to help our customers leverage our data platform capabilities to build their own analytics.
The purpose of this example is to demonstrate how to extract pixels of interest from our EarthData Store based on a geometry and data selection criteria such as sensors and bands of interest, access standard or premium cloud masks, and publish the results as an n-dimensional object (zarr file) on a cloud storage location. The extracted data can be used to support analyses and analytic creation, as in the notebooks showcasing how to generate a vegetation index for non-cloudy dates leveraging the spatial dimensions of the dataset, or how to plot vegetation index evolution over time.
It highlights the ability to quickly create a pixel pipeline and generate n-dimensional reflectance objects in xarray format.
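As an illustration of the kind of analysis such a reflectance datacube enables, here is a minimal vegetation index (NDVI) sketch using plain NumPy. The band names, array shapes, and reflectance values below are illustrative assumptions, not the processor's actual output format.

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Compute the Normalized Difference Vegetation Index.

    NDVI = (NIR - RED) / (NIR + RED), values in [-1, 1].
    """
    red = red.astype(float)
    nir = nir.astype(float)
    # Avoid division by zero where both bands are 0.
    denom = np.where((nir + red) == 0, np.nan, nir + red)
    return (nir - red) / denom

# Hypothetical 2x2 reflectance tiles (not real EarthDataStore values).
red = np.array([[0.1, 0.2], [0.1, 0.3]])
nir = np.array([[0.5, 0.6], [0.4, 0.3]])
print(ndvi(red, nir))
```

On a real datacube the same formula applies element-wise across the spatial and temporal dimensions, which is what the notebooks exploit to map an index per date.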
Use of this project requires valid credentials for the <geosys/> platform. If you need trial access, please register here.
To run this example, you will need the following tools installed:
- Install Conda: please install Conda on your computer. You can download and install it by following the instructions provided on the official Conda website.
- Install Docker Desktop: please install Docker Desktop on your computer. You can download and install it by following the instructions provided on the official Docker Desktop website.
- Install Jupyter Notebook: please install Jupyter Notebook on your computer by following the instructions provided on the official Jupyter website.
- Install Git: please install Git on your computer (for example via GitHub Desktop) by visiting https://desktop.github.com/ and following the provided instructions.
This package has been tested on Python 3.10.12.
To set up the project, follow these steps:
- Clone the project repository:
  git clone https://github.com/earthdaily/reflectance-datacube-processor
- Change the directory:
  cd reflectance-datacube-processor
- Fill in the environment variables (.env):
  Ensure that you populate the .env file with your credentials. To access and use our STAC catalog, EarthDataStore, make sure the following environment variables are set in your .env file:
EDS_API_URL = https://api.eds.earthdaily.com/archive/v1/stac/v1
EDS_AUTH_URL = <eds auth url>
EDS_CLIENT_ID = <your client id>
EDS_SECRET = <your secret>
You can also specify the EDS_CLIENT_ID and EDS_SECRET directly on the API; these two parameters are not mandatory in the .env file.
To publish results to cloud storage, please add your credentials so the processor can write outputs:
AWS_ACCESS_KEY_ID = <...>
AWS_SECRET_ACCESS_KEY = <...>
AWS_BUCKET_NAME = <...>
AZURE_ACCOUNT_NAME = <...>
AZURE_BLOB_CONTAINER_NAME = <...>
AZURE_SAS_CREDENTIAL = <...>
You can also specify the AWS_BUCKET_NAME directly on the API.
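For local experimentation outside Docker, the variables above need to end up in the process environment. The python-dotenv package is the usual way to do this; as a minimal stdlib-only sketch, a .env file of KEY = value lines can be parsed like so (the variable values shown are placeholders):

```python
import os

def load_env(path: str = ".env") -> None:
    """Parse simple KEY = value lines into os.environ.

    Skips blank lines, comments, and lines without an '=' sign.
    """
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip()

# Example usage (assuming a .env file in the current directory):
# load_env()
# print(os.environ.get("EDS_CLIENT_ID"))
```

This is only a sketch for simple files; quoted values, export prefixes, and multiline values would need python-dotenv or similar.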
To set up and run the project using Docker, follow these steps:
- Build the Docker image locally:
  docker build --tag reflectancedatacubeprocessor .
- Run the Docker container:
  docker run -e RUN_MODE_ENV=API -p 8100:80 reflectancedatacubeprocessor
- Access the API by opening a web browser and navigating to the following URL:
  http://127.0.0.1:8100/docs
This URL opens the Swagger UI documentation. Click the "Try it out" button under each POST endpoint and enter the request parameters and body.
Parameters:
- Cloud storage, ex: "AWS_S3"
- Collections, ex: "Venus-l2a"
- Assets, ex: "red"
- Cloud mask, ex: "native"
- Create metacube, ex: "no"
- Clear coverage (%), ex: "80"
Body Example:
{
"geometry": "POLYGON ((1.26 43.427, 1.263 43.428, 1.263 43.426, 1.26 43.426, 1.26 43.427))",
"startDate": "2019-05-01",
"endDate": "2019-05-31",
"EntityID": "entity_1"
}
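Once the container is running, the same POST can be made programmatically instead of via the Swagger UI. The sketch below only builds the JSON payload with the standard library; the endpoint path in the commented-out request is an assumption, so check /docs for the exact path and query-parameter names.

```python
import json
import urllib.request  # used by the commented-out request below

def build_payload(geometry: str, start_date: str, end_date: str, entity_id: str) -> bytes:
    """Serialize the request body shown above to JSON bytes."""
    body = {
        "geometry": geometry,
        "startDate": start_date,
        "endDate": end_date,
        "EntityID": entity_id,
    }
    return json.dumps(body).encode("utf-8")

payload = build_payload(
    "POLYGON ((1.26 43.427, 1.263 43.428, 1.263 43.426, 1.26 43.426, 1.26 43.427))",
    "2019-05-01",
    "2019-05-31",
    "entity_1",
)

# Hypothetical call against the locally running container (uncomment to try;
# replace the path with the real POST endpoint shown in /docs):
# req = urllib.request.Request(
#     "http://127.0.0.1:8100/<endpoint-from-docs>",
#     data=payload,
#     headers={"Content-Type": "application/json"},
#     method="POST",
# )
# print(urllib.request.urlopen(req).read())
```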
To use the Jupyter notebooks of the project, please follow these steps:
- Open a terminal in the reflectance-datacube-processor folder.
- Create the required Conda environment:
  conda env create -f environment.yml
- Activate the Conda environment:
  conda activate earthdaily-processor
- Start a Jupyter Notebook server:
  jupyter notebook --port=8080
- Open the example notebook (datacube-sustainable-practices.ipynb) by clicking on it.
- Run the notebook cells to execute the code example and plot the results.
NB: To use the example notebooks, you first need to generate the example datacubes. They are described in each notebook (any parameters not mentioned must keep their default values).
├── README.md
├── notebooks
│ ├───datacube-cloud_mask.ipynb
│ ├───datacube-digital-agriculture.ipynb
│ ├───datacube-simulated-dataset.ipynb
│ └───datacube-sustainable-practices.ipynb
├── requirements.txt
├── environment.yml
├── Dockerfile
├── .env
├── LICENSE
├── VERSION
├── setup.py
├───src
│ ├───main.py
│ ├───test.py
│ ├───api
│ │ ├── files
│ │ │ └── favicon.svg
│ │ ├── __init__.py
│ │ ├── api.py
│ │ └── constants.py
│ ├───data
│ │ └── processor_input_example.json
│ ├───schemas
│ │ ├── __init__.py
│ │ ├── input_schema.py
│ │ └── output_schema.py
│ ├───utils
│ │ ├── __init__.py
│ │ ├── utils.py
│ │ └── file_utils.py
│ └───earthdaily_data_procesor
│ ├── __init__.py
│ └── processor.py
└── test_environment.py
The following links will provide access to more information:
If this project has been useful, or helped you or your business save precious time, don't hesitate to give it a star.
Distributed under the MIT License.
For any additional information, please email us.
© 2023 Geosys Holdings ULC, an Antarctica Capital portfolio company | All Rights Reserved.