Please read the review process doc before sending your first PR.
The following steps are used to generate the OpenMRS Docker image bundled with the modules required for this project:
- Set up the latest OpenMRS distro locally:

  ```shell
  mvn openmrs-sdk:setup -DserverId=openmrs-analytics
  ```
- Generate the Docker image setup using the OpenMRS SDK. This step requires the path to the `openmrs-distro.properties` file of your OpenMRS instance:

  ```shell
  mvn openmrs-sdk:build-distro -Ddistro=/{path}/openmrs-distro.properties
  ```
  A folder named `docker` is generated, which contains a Dockerfile and other setup files (see the example Dockerfile generated by the OpenMRS SDK).
- Build the Docker image:

  ```shell
  cd openmrs-analytics/web
  docker build -t openmrs/openmrs-reference-application-distro:analytics .
  ```
- Push the image to Docker Hub:

  ```shell
  docker push openmrs/openmrs-reference-application-distro:analytics
  ```
When working on a feature in the Debezium-based pipeline, you can replay old binlog updates by setting the system properties `database.offsetStorage` and `database.databaseHistory` to an old version of the history. In other words, you can start from a history/offset pair A, then add some events in OpenMRS (e.g., add an Observation) to create updates in the binlog. If you keep a copy of the original version of A (from before the changes), you can reuse it in future runs to replay the new events without any further interaction with OpenMRS.
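The snapshot-and-restore flow above can be sketched as follows. The file name `offset.dat` is a hypothetical stand-in for whatever path you configured via `database.offsetStorage` (the same idea applies to the `database.databaseHistory` file):

```shell
# Work in a scratch directory; offset.dat stands in for the real offset file.
mkdir -p /tmp/debezium-replay-demo
cd /tmp/debezium-replay-demo
echo "offset-at-A" > offset.dat
cp offset.dat offset.dat.A                    # snapshot state A before making changes
echo "offset-after-new-events" > offset.dat   # the pipeline advances the offset
cp offset.dat.A offset.dat                    # restore A to replay the new events
cat offset.dat                                # prints: offset-at-A
```

Pointing a future run at the restored copy of A makes Debezium re-read the binlog events recorded after that offset.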
When a pull request is submitted, it triggers a run in a Google-owned Cloud Build project using the build config file defined in `cloudbuild.yaml`. Any time a new commit is pushed to the PR, a new Cloud Build run is triggered. You can see the status of each run (pass/running/fail) by scrolling to the bottom of the PR.
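For orientation, a Cloud Build config file is a YAML file that lists build steps, each of which runs in its own container image. The snippet below is an illustrative sketch of the general shape, not the actual contents of this repo's `cloudbuild.yaml`:

```yaml
steps:
  # Each step names a builder image and the arguments passed to it.
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'openmrs/openmrs-reference-application-distro:analytics', '.']
```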
To view the logs from one of the builds, click on the Details link. This takes you to a summary page for the build, where you will see the status of the build along with its Build ID. For example:

Build 60a1260b-280a-4390-ad34-b1c1d7a4efb2 successful
To view the logs of this build, substitute the build ID in place of BUILD_ID in the URL below:
https://storage.googleapis.com/cloud-build-gh-logs/log-BUILD_ID.txt
Copy the URL into your browser, and you should see the logs from your build.
In the example above, the URL for the logs for that build would be:
https://storage.googleapis.com/cloud-build-gh-logs/log-60a1260b-280a-4390-ad34-b1c1d7a4efb2.txt
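The substitution can also be done in the shell, using the example build ID above:

```shell
# Build the log URL from a Cloud Build ID.
BUILD_ID="60a1260b-280a-4390-ad34-b1c1d7a4efb2"
echo "https://storage.googleapis.com/cloud-build-gh-logs/log-${BUILD_ID}.txt"
# prints: https://storage.googleapis.com/cloud-build-gh-logs/log-60a1260b-280a-4390-ad34-b1c1d7a4efb2.txt
```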
NOTE: Clicking the View more details on Google Cloud Build link will redirect you to a Google Cloud page that you do not have permission to view.
The CI pipeline can also be run locally using the Cloud Build local builder. To set up the local builder, follow the instructions here; you will need Docker and the Google Cloud SDK to install it. Once the local builder is installed, do the following:
- Stop any running `openmrs`, `openmrs-fhir-mysql`, and `sink-server` containers:

  ```shell
  docker stop sink-server openmrs openmrs-fhir-mysql
  ```
- Run the e2e tests using the local builder:

  ```shell
  cloud-build-local --dryrun=false .
  ```
The end-to-end tests are ordered based on their dependencies on previous steps, which allows certain steps to run concurrently. The order of the executed steps is shown in the picture below.

The flowchart was created with the https://app.diagrams.net/ tool; the editable version is available at this location.
The Parquet tools library is used to inspect Parquet files. The jar file is included in the project under `/e2e-tests/parquet-tools-1.11.1.jar`. To regenerate this jar file:
- Clone parquet-mr.
- Check out the last released version:

  ```shell
  git checkout apache-parquet-1.11.1
  ```

- Install the Thrift compiler v0.12.0.
- Build the jar file:

  ```shell
  mvn -pl parquet-tools -am clean install -Plocal -DskipTests
  ```

  You should now see `parquet-tools-1.11.1.jar` inside the parquet-tools module.

- Command usage for Parquet tools:

  ```shell
  java -jar ./parquet-tools-<VERSION>.jar <command> my_parquet_file
  ```
NOTE: Parquet tools will be replaced with parquet-cli in the next release, apache-parquet-1.12.0.