Geokodikas can be run using Docker.
- **geokodikas/db-production**: contains a PostGIS server with the correct extensions installed.

  ```shell
  docker run -p 5432:5432 \
    -e POSTGRES_PASSWORD='geokodikas' \
    -e POSTGRES_USER='geokodikas' \
    -e POSTGRES_DB='geokodikas' \
    geokodikas/db-production:master
  ```
- **geokodikas/geokodikas**: the main Docker container, which contains the geokodikas HTTP API. You can start this container by first creating a `config.json` file:

  ```json
  {
    "importer": {
      "output_threshold": 10000,
      "max_queue_size": 1000000,
      "num_processors": 8,
      "processor_block_size": 10000
    },
    "database": {
      "username": "geokodikas",
      "password": "geokodikas",
      "db_name": "geokodikas",
      "host": "localhost",
      "port": "5432"
    },
    "import_from_export": {
      "file_location": "",
      "file_md5sum": "",
      "try_import_on_http": true
    },
    "http": {
      "public_url": "http://localhost:8080"
    }
  }
  ```
  The `file_location` and `file_md5sum` values have to be configured using the information available at https://github.com/geokodikas/exports. Then start the container (for demonstration purposes we use `--net=host`):

  ```shell
  docker run --net=host \
    -v $PWD/config.json:/opt/geokodikas/config.json \
    -p 8080:8080 \
    geokodikas/geokodikas:master
  ```

  If the database does not contain an import yet, this container downloads the `file_location` file and imports it, after which it exits. Run the container again with the same command; this time the HTTP API starts. The API can be reached at `http://localhost:8080`.
- **geokodikas/osm2pgsql**: contains the osm2pgsql tool used by the import pipeline.
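The `file_md5sum` field lets geokodikas verify the downloaded export. If you want to check a locally downloaded export file against the checksum listed on the exports page before filling in `config.json`, the computation can be sketched as follows (the file name below is a placeholder, not a real export):

```python
import hashlib

def md5sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the hex MD5 digest of a file, reading it in chunks
    so large export files do not have to fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare the result against the md5sum published at
# https://github.com/geokodikas/exports for the chosen export,
# e.g.: md5sum("export.dump")  (placeholder file name)
```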
A more robust setup can be achieved using docker-compose. An example docker-compose file is available in the docker-compose directory.
```shell
git clone https://github.com/geokodikas/docker
cd docker/docker-compose
docker-compose up -d db-production
docker-compose up geokodikas # imports the configured export, then exits
docker-compose up geokodikas # starts the HTTP API
```
This directory contains a `config.json` file that imports Belgium into the database. After the import has finished, start the geokodikas container again.
The `nomad/` directory contains some example configuration for Nomad. The `geokodikas/geokodikas` Docker container reads the `config.json` file filled in by Nomad. This file may contain the following:
```json
"import_from_export": {
  "file_location": "https://example.com/full_importbelgium.osm.pbf_5b2197033cc053c537957d72faa2fbf8__nvaymwo4",
  "file_md5sum": "0d782ac1a1dea4d4ae8663ca7ea28d37",
  "try_import_on_http": true
}
```
When starting, geokodikas checks in the `import_from_export_metadata` table whether the correct import is already available in the database. If it is not, geokodikas downloads the configured file and restores it into the database using `pg_restore`. After updating the metadata table, the process exits. Nomad then restarts the container; this time the container sees that the import is already available and thus starts the HTTP API.
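The startup behaviour described above boils down to a small decision routine. A minimal sketch (the function name, return values, and the `try_import_on_http` semantics are illustrative assumptions, not the actual geokodikas code):

```python
def startup_action(import_available: bool, try_import_on_http: bool = True) -> str:
    """Decide what the container does on startup.

    Mirrors the described flow: if the correct import is already recorded
    in the import_from_export_metadata table, serve the HTTP API;
    otherwise (assuming try_import_on_http enables importing on startup)
    download the export, pg_restore it, update the metadata table, and
    exit so the scheduler restarts the container.
    """
    if import_available:
        return "start-http-api"
    if try_import_on_http:
        return "download-and-import-then-exit"
    return "fail"
```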
In the Nomad configuration file, the database to import can be configured using the [meta keys](https://github.com/geokodikas/docker/blob/master/nomad/geokodikas.nomad#L54). After changing these parameters, you can ask Nomad to run the new job using:

```shell
nomad job run geokodikas.nomad
```
The import will first be performed in a canary allocation. When you run a new plan, Nomad creates one new allocation with one canary. Fabio makes this instance available under the `/canary` URL.
After testing whether the canary works, you can promote it:

```shell
nomad job promote geokodikas
```
Be careful with canary updates when `count=2`: it seems that some downtime is possible in that case.