diff --git a/README.md b/README.md
index 489100c..ad16085 100644
--- a/README.md
+++ b/README.md
@@ -3,7 +3,7 @@
 [![Coverage Status](https://coveralls.io/repos/github/c-martinez/FAIRDataPoint/badge.svg?branch=dev)](https://coveralls.io/github/c-martinez/FAIRDataPoint?branch=dev)

-### FAIR Data Point (FDP)
+# FAIR Data Point (FDP)

 Python implementation of FAIR Data Point.

@@ -11,27 +11,26 @@ FDP is a RESTful web service that enables data owners to describe and to expose

 *FDP->catalogs->datasets->distributions*

-FDP software specification can be found [here](https://dtl-fair.atlassian.net/wiki/spaces/FDP/pages/6127622/FAIR+Data+Point+Software+Specification).
+The FDP software specification can be found [here](https://github.com/FAIRDataTeam/FAIRDataPoint-Spec/blob/master/spec.md).

 FDP has been implemented in:

-* [Python](https://github.com/NLeSC/ODEX-FAIRDataPoint/)
+* [Python](https://github.com/NLeSC/FAIRDataPoint/)
 * [Java](https://github.com/DTL-FAIRData/FAIRDataPoint)

 ## Installation
-------------
-To install fdp, do:
+To install FDP, run:

 ```bash
-git clone https://github.com/NLeSC/ODEX-FAIRDataPoint.git
-cd ODEX-FAIRDataPoint
+git clone https://github.com/NLeSC/fairdatapoint.git
+cd fairdatapoint
 pip install .
 ```

-TODO: register on pypi and change this to `pip install fairdatapoint`
+TODO: update this to `pip install fairdatapoint` once a release is published on PyPI.

 ## Running

 ```bash
-fdp-run samples/plant_breeding_group.ttl
+fdp-run localhost 8080
 ```

 Then visit from your browser: http://localhost:8080/

@@ -40,104 +39,86 @@ Then visit from your browser: http://localhost:8080/
 Run tests (including coverage) with:

 ```bash
-python setup.py test
+pip install .[tests]
+pytest
 ```

 TODO: Include a link to your project's full documentation here.

-## Contributing
-
-If you want to contribute to the development of FAIR Data Point,
-have a look at the [contribution guidelines](CONTRIBUTING.rst).
-
-## License
-
-Copyright (c) 2019,
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
 ## Deploy with Docker

-TODO: update docker deployment
-
-`docker run -p 8080:8080 -d nlesc/odex-fairdatapoint`
-
-## Deploy without Docker
-TODO: update this section
-
-Clone this repo.
+Download `docker-compose.prod.yml` from this repo, change `HOSTNAME` in the file to your host name, and then run:

 ```
-git clone https://github.com/NLeSC/ODEX-FAIRDataPoint.git
-cd ODEX-FAIRDataPoint/fdp-api/python
+docker-compose -f docker-compose.prod.yml up -d
 ```

-Install FDP into ENV.
+## Deploy without Docker
+
+Before deploying FDP, you first need a running SPARQL database (see the example below for starting one locally with Docker).

 ```
-python -m venv ENV
-source /ENV/bin/activate
+git clone https://github.com/NLeSC/fairdatapoint.git
+cd fairdatapoint
+pip install .

-make install
-# make clean # removes files from doc dir (except swagger.json)
+# usage: fdp-run <host> <port> --db=<sparql-endpoint>
+fdp-run example.com 8080 --db='http://dbpedia.org/sparql'
 ```

-Edit metadata in `config.ini`.
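+
+If you do not already have a SPARQL database, one way to get one locally is to run the
+same Virtuoso image used in `docker-compose.prod.yml`. This is a minimal sketch, not a
+verified setup: it assumes Docker is available, and any SPARQL 1.1 endpoint that allows
+updates should work equally well.
+
+```bash
+# Start a local Virtuoso triple store with SPARQL UPDATE enabled
+# (image and setting taken from docker-compose.prod.yml),
+# then point fdp-run at its SPARQL endpoint (assumed to be on localhost:8890).
+docker run -d -p 8890:8890 -e SPARQL_UPDATE=true tenforce/virtuoso
+fdp-run localhost 8080 --db='http://localhost:8890/sparql'
+```
+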
+## Web API documentation

-```
-# in development
-make serve-dev # with default HOST=127.0.0.1:8080
-make test
-# in production
-make -e serve-prod HOST=example.com
-```
+FAIR Data Point (FDP) exposes the following endpoints (URL paths):

-**Web API documentation**
+| Endpoint | GET | POST | DELETE |
+|--------------|:--------------:|:-----------------:|:--------------:|
+| fdp | Output metadata triples | Remove existing triples for a specific ID, then create new triples with the request data | Not Allowed |
+| catalog/ | Output all IDs | Remove existing triples for a specific ID, then create new triples with the request data | Remove all IDs |
+| dataset/ | Output all IDs | Remove existing triples for a specific ID, then create new triples with the request data | Remove all IDs |
+| distribution/ | Output all IDs | Remove existing triples for a specific ID, then create new triples with the request data | Remove all IDs |
+| catalog/\<id\> | Output metadata triples | Not Allowed | Remove the specific ID |
+| dataset/\<id\> | Output metadata triples | Not Allowed | Remove the specific ID |
+| distribution/\<id\> | Output metadata triples | Not Allowed | Remove the specific ID |

-Base URL: `http://127.0.0.1:8080`
-
-**Access endpoints to request metadata programmatically**
+### Access endpoints to request metadata programmatically

 FDP: `curl -iH 'Accept: text/turtle' [BASE URL]/fdp`

-Catalog: `curl -iH 'Accept: text/turtle' [BASE URL]/catalog/catalog-01`
+Catalog: `curl -iH 'Accept: text/turtle' [BASE URL]/catalog/catalog01`

-Dataset: `curl -iH 'Accept: text/turtle' [BASE URL]/dataset/breedb`
+Dataset: `curl -iH 'Accept: text/turtle' [BASE URL]/dataset/dataset01`

-Distribution: `curl -iH 'Accept: text/turtle' [BASE URL]/distribution/breedb-sparql`
+Distribution: `curl -iH 'Accept: text/turtle' [BASE URL]/distribution/dist01`

-Note: FDP supports the following RDF serializations (MIME-types):
+### Supported RDF serializations (MIME types)

 * Turtle: `text/turtle`
 * N-Triples: `application/n-triples`
+* N3: `text/n3`
 * RDF/XML: `application/rdf+xml`
 * JSON-LD: `application/ld+json`
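+
+Metadata can also be created or updated by POSTing an RDF document to the corresponding
+endpoint. The request below is a minimal sketch rather than a verified recipe: it assumes
+the request body may use any of the serializations listed above, and `catalog01.ttl` is a
+placeholder for your own metadata file.
+
+```bash
+# Create or update catalog metadata by sending a Turtle document (assumed request format).
+curl -i -X POST -H 'Content-Type: text/turtle' \
+     --data-binary '@catalog01.ttl' \
+     [BASE URL]/catalog/
+```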

-## FAIR Data Point specification
+## Contributing
+
+If you want to contribute to the development of FAIR Data Point,
+have a look at the [contribution guidelines](CONTRIBUTING.rst).
+
+## License

-FAIR Data Point (FDP) exposes the following endpoints (URL paths):
+Copyright (c) 2019,
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0

-| Endpoints | Description |
-| -- | -- |
-| [ /, /doc, /doc/ ] | Redirects to the API documentation |
-| /fdp | Returns FDP metadata |
-| /catalog/{catalogID} | Returns catalog metadata (default: catalog-01) |
-| /dataset/{datasetID} | Returns dataset metadata (default: breedb) |
-| /distribution/{distributionID} | Returns distribution metadata (default: breedb-sparql) |
-
-This services makes use of:
- - [Data Catalog Vocabulary](http://www.w3.org/TR/vocab-dcat/)
- - [Dublin Core Metadata Terms](http://dublincore.org/documents/dcmi-terms/)
- - [DBpedia](http://dbpedia.org/resource/)
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.

 ## Credits

diff --git a/docker-compose.prod.yml b/docker-compose.prod.yml
new file mode 100644
index 0000000..2315b6c
--- /dev/null
+++ b/docker-compose.prod.yml
@@ -0,0 +1,17 @@
+version: '3'
+services:
+  fdp:
+    build: .
+    image: ""
+    command: fdp-run HOSTNAME 8080 --db http://db:8890/sparql
+    ports:
+      - "8080:8080"
+    depends_on:
+      - db
+  db:
+    image: "tenforce/virtuoso"
+    ports:
+      - "8890:8890"
+      - "1111:1111"
+    environment:
+      SPARQL_UPDATE: "true"
\ No newline at end of file
diff --git a/setup.cfg b/setup.cfg
index 8768be4..66ad7e8 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -1,10 +1,6 @@
 [metadata]
 description-file = README.rst

-[aliases]
-# Define `python setup.py test`
-test=pytest
-
 [coverage:run]
 branch = True
 source = fdp