Dataflows processors to work with CKAN.
- dump_to_ckan processor
The package uses semantic versioning, which means that major versions may include breaking changes. It's recommended to specify a version range in your setup/requirements file, e.g. dataflows-ckan>=1.0,<2.0.
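For example, a consuming project's setup.py could declare that range like this (the package name my-pipeline is hypothetical):

from setuptools import setup

setup(
    name='my-pipeline',  # hypothetical consumer package
    version='0.1.0',
    # Stay within the 1.x series to avoid breaking changes.
    install_requires=['dataflows-ckan>=1.0,<2.0'],
)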
$ pip install dataflows-ckan
These processors must be used as part of a data flow. For example:
from dataflows import Flow, load
from dataflows_ckan import dump_to_ckan

flow = Flow(
    load('data/data.csv'),
    dump_to_ckan(
        host,  # URL of the target CKAN instance
        api_key,  # CKAN API key with write access
        owner_org,  # CKAN organization that will own the dataset
        overwrite_existing_data=True,
        push_to_datastore=False,
        push_to_datastore_method='insert',
        **options,
    ),
)
flow.process()
dump_to_ckan saves the Data Package to a CKAN instance.
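As a rough sketch, the same call can also push each resource's rows into the CKAN DataStore by enabling push_to_datastore; the host, API key, and organization values below are placeholders, not real credentials:

from dataflows import Flow, load
from dataflows_ckan import dump_to_ckan

flow = Flow(
    load('data/data.csv'),
    dump_to_ckan(
        'https://demo.ckan.org',  # placeholder host
        'my-api-key',  # placeholder API key
        'my-org',  # placeholder organization
        overwrite_existing_data=True,
        push_to_datastore=True,  # also load rows into the CKAN DataStore
        push_to_datastore_method='insert',
    ),
)
flow.process()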
Create a virtual environment and install Poetry.
Then install the package in editable mode:
$ make install
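For reference, the whole setup sequence might look like this, assuming a POSIX shell and Poetry installed via pip:

$ python -m venv .venv
$ source .venv/bin/activate
$ pip install poetry
$ make install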
Run the tests:
$ make test
Format your code:
$ make format
- Full port to dataflows and some refactoring, with a basic integration test.
- An initial port from https://github.com/frictionlessdata/datapackage-pipelines-ckan, based on the great work of @brew and @amercader.