migrated example #312

Open · wants to merge 1 commit into base: main
122 changes: 122 additions & 0 deletions c7-c8-multi-engine/README.md
@@ -0,0 +1,122 @@
# Multi Engine Adaptation C7 - C8

![Multi Engine](./images/multi-engine.png)

## Project Overview

This project demonstrates a partial transition to Camunda 8: a multi-engine architecture in which Camunda 7 (C7) and Camunda 8 (C8) coexist and communicate with each other through their APIs.

**Process Diagram**:
![Diagram](./images/hybrid-process.png)
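
How the two engines hand work over to each other is project-specific. As one illustration (a hedged sketch, not necessarily the hand-off used in this example), a Camunda 7 external task worker can start the Camunda 8 part of the process via the Zeebe Java client and pass the business key along as the `correlationKey` variable described further below. The topic name, process id and variable names here are hypothetical:

```java
import io.camunda.zeebe.client.ZeebeClient;
import java.util.HashMap;
import java.util.Map;
import org.camunda.bpm.client.ExternalTaskClient;

public class HandOverToC8Worker {

    public static void main(String[] args) {
        // Camunda 8 side: Zeebe Java client against the local docker-compose gateway
        ZeebeClient zeebe = ZeebeClient.newClientBuilder()
                .gatewayAddress("localhost:26500")
                .usePlaintext()
                .build();

        // Camunda 7 side: external task client against the engine REST API
        ExternalTaskClient c7 = ExternalTaskClient.create()
                .baseUrl("http://localhost:8080/engine-rest")
                .build();

        // Hypothetical topic of a service task in the C7 process that hands over to C8
        c7.subscribe("hand-over-to-c8")
                .handler((externalTask, externalTaskService) -> {
                    Map<String, Object> variables = new HashMap<>();
                    // the C7 business key travels on as the correlationKey variable
                    variables.put("correlationKey", externalTask.getBusinessKey());
                    variables.put("payload", externalTask.getVariable("payload"));

                    zeebe.newCreateInstanceCommand()
                            .bpmnProcessId("C8_Part") // hypothetical C8 process id
                            .latestVersion()
                            .variables(variables)
                            .send()
                            .join();

                    externalTaskService.complete(externalTask);
                })
                .open();
    }
}
```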

### Key Objectives

- **Enhanced Flexibility:** Utilizing the features of Camunda 8 to introduce greater flexibility into the system.

- **Partial Migration:** Implementing a phased migration approach to gradually transition to Camunda 8 while preserving existing functionality.

- **Prioritized Implementation:** Strategically prioritizing the migration based on feature availability and specific requirements.

### End-to-End Visibility with Optimize

The Event Ingestion API in Camunda Optimize (available for Camunda 7 only):

- Enables a unified view of processes and activities across the organization.
- Supports cross-platform analytics and visualization.
- Facilitates process optimization and data-driven decision-making.

Via the Ingestion API we can send Camunda 8 Events to Optimize.
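
A hedged sketch of such an ingestion call using Java's built-in HTTP client (the endpoint path and Authorization header follow the documented Optimize event ingestion REST API; the access token, `source`, `group`, `type` and `traceid` values below are placeholders — the `traceid` would carry the correlationKey / business key):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Instant;
import java.util.UUID;

public class OptimizeIngestionExample {

    public static void main(String[] args) throws Exception {
        // CloudEvents-style batch payload as expected by the Optimize event ingestion API
        String events = """
                [{
                  "specversion": "1.0",
                  "id": "%s",
                  "source": "zeebe",
                  "type": "jobActivated",
                  "time": "%s",
                  "traceid": "1235555x",
                  "group": "c8-multi-engine",
                  "data": { "jobType": "c8-service-task" }
                }]""".formatted(UUID.randomUUID(), Instant.now());

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8090/api/ingestion/event/batch"))
                // access token as configured for event ingestion in the Optimize configuration
                .header("Authorization", "Bearer mySecretAccessToken")
                .header("Content-Type", "application/cloudevents-batch+json")
                .POST(HttpRequest.BodyPublishers.ofString(events))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```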

#### Implementation in a gRPC Client Interceptor

Camunda 8 events are sent to Optimize via a gRPC client interceptor. The following steps happen:

1. **Initialize gRPC Client Interceptor:**
[Set up a gRPC client interceptor](./src/main/java/org/camunda/consulting/example/interceptor/InterceptorConfiguration.java) for custom logic in outgoing calls.

2. **Capture Job Information:**
Capture relevant job data when a [job is activated](./src/main/java/org/camunda/consulting/example/interceptor/OptimizeEventInterceptor.java) in a job worker.

3. **Prepare Data for Ingestion:**
Aggregate the essential data (e.g., job ID, process instance ID) for ingestion into Camunda Optimize. A process variable named correlationKey is used as the traceId; it maps to the business key in Camunda 7.

4. **Ingest Data into Camunda Optimize:**
Utilize the Ingest API endpoints to send a POST request with the prepared data.

Together these steps stream job-related data from Camunda 8 into Camunda Optimize, where it can be analyzed alongside the Camunda 7 processes; a sketch of such an interceptor follows.
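
A minimal sketch of what such an interceptor can look like (names and details are illustrative, not the exact code in OptimizeEventInterceptor.java; it assumes Java 17 and the generated Zeebe gateway protocol classes on the classpath). The response listener is wrapped so every ActivateJobs response can be turned into an Optimize event:

```java
import io.camunda.zeebe.gateway.protocol.GatewayOuterClass.ActivateJobsResponse;
import io.camunda.zeebe.gateway.protocol.GatewayOuterClass.ActivatedJob;
import io.grpc.CallOptions;
import io.grpc.Channel;
import io.grpc.ClientCall;
import io.grpc.ClientInterceptor;
import io.grpc.ForwardingClientCall.SimpleForwardingClientCall;
import io.grpc.ForwardingClientCallListener.SimpleForwardingClientCallListener;
import io.grpc.Metadata;
import io.grpc.MethodDescriptor;

public class OptimizeEventInterceptorSketch implements ClientInterceptor {

    @Override
    public <ReqT, RespT> ClientCall<ReqT, RespT> interceptCall(
            MethodDescriptor<ReqT, RespT> method, CallOptions callOptions, Channel next) {

        return new SimpleForwardingClientCall<ReqT, RespT>(next.newCall(method, callOptions)) {
            @Override
            public void start(Listener<RespT> responseListener, Metadata headers) {
                // wrap the listener so we see every response message of every outgoing call
                super.start(new SimpleForwardingClientCallListener<RespT>(responseListener) {
                    @Override
                    public void onMessage(RespT message) {
                        // only ActivateJobs responses carry the job data we want to ingest
                        if (message instanceof ActivateJobsResponse response) {
                            response.getJobsList()
                                    .forEach(OptimizeEventInterceptorSketch.this::sendToOptimize);
                        }
                        super.onMessage(message);
                    }
                }, headers);
            }
        };
    }

    private void sendToOptimize(ActivatedJob job) {
        // aggregate job key, process instance key and the correlationKey variable
        // (the C7 business key) and POST them as an event batch to the Optimize
        // ingestion endpoint, e.g. with an HTTP call like the one sketched above
    }
}
```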

#### See Events in Optimize

You need to enable event-based processes in Optimize (in this example the settings are mounted via [environment-config.yaml](./docker/environment-config.yaml)):

```
engines:
'camunda-bpm':
eventImportEnabled: true

eventBasedProcess:
authorizedUserIds: ['demo']
authorizedGroupIds: []
eventImport:
enabled: true
```

Now the ingested events are visible in Optimize:

1. Create BPMN Diagram

2. Map Events

![Map Events](./images/map-events-optimize.PNG)

3. Create Reports

![Create Reports](./images/combined-view-optimize.PNG)

#### Limitations

- Only ActivateJob events are sent to Optimize (no BpmnError events and no engine events such as timers or received messages)
- Activated jobs are reported to Optimize even if their completion fails afterwards


## What you need to run the project

- A Camunda license key (including Optimize); update it in [license.txt](./docker/license.txt)
- Camunda EE images (you can also use Camunda Run CE, but Optimize requires an enterprise license)

```
docker login registry.camunda.cloud
Username: my.username
Password: my.pw

Login Succeeded
docker pull registry.camunda.cloud/cambpm-ee/camunda-bpm-platform-ee:run-7.19.6
docker pull registry.camunda.cloud/optimize-ee/optimize:3.10.5
```

```
# start Camunda 7, Zeebe, Operate, Optimize and Elasticsearch (run from ./docker)
docker compose up
```

```
# start the Spring Boot application with the job workers and the Optimize event interceptor
mvn spring-boot:run
```



## Start an Instance

```
curl --location 'http://localhost:8080/engine-rest/process-definition/key/C7_First/start' \
--header 'Content-Type: application/json' \
--data '{
"variables": {
"payload": {
"value": "Hello",
"type": "String"
}

},
"businessKey": "1235555x"
}'
```
113 changes: 113 additions & 0 deletions c7-c8-multi-engine/docker/docker-compose.yaml
@@ -0,0 +1,113 @@
services:
camunda-run:
image: registry.camunda.cloud/cambpm-ee/camunda-bpm-platform-ee:run-7.19.6
container_name: camunda-run
ports:
- 8080:8080
restart: unless-stopped
networks:
- camunda-platform
volumes:
- type: bind
source: ./license.txt
target: /camunda/.camunda/license.txt
zeebe: # https://docs.camunda.io/docs/self-managed/platform-deployment/docker/#zeebe
image: camunda/zeebe:8.2.15
container_name: zeebe
ports:
- "26500:26500"
- "9600:9600"
environment: # https://docs.camunda.io/docs/self-managed/zeebe-deployment/configuration/environment-variables/
- ZEEBE_BROKER_EXPORTERS_ELASTICSEARCH_CLASSNAME=io.camunda.zeebe.exporter.ElasticsearchExporter
- ZEEBE_BROKER_EXPORTERS_ELASTICSEARCH_ARGS_URL=http://elasticsearch:9200
# default is 1000, see here: https://github.com/camunda/zeebe/blob/main/exporters/elasticsearch-exporter/src/main/java/io/camunda/zeebe/exporter/ElasticsearchExporterConfiguration.java#L259
- ZEEBE_BROKER_EXPORTERS_ELASTICSEARCH_ARGS_BULK_SIZE=1
# allow running with low disk space
- ZEEBE_BROKER_DATA_DISKUSAGECOMMANDWATERMARK=0.998
- ZEEBE_BROKER_DATA_DISKUSAGEREPLICATIONWATERMARK=0.999
- "JAVA_TOOL_OPTIONS=-Xms512m -Xmx512m"
restart: always
healthcheck:
test: [ "CMD-SHELL", "timeout 10s bash -c ':> /dev/tcp/127.0.0.1/9600' || exit 1" ]
interval: 30s
timeout: 5s
retries: 5
start_period: 30s
volumes:
- zeebe:/usr/local/zeebe/data
networks:
- camunda-platform
depends_on:
- elasticsearch

operate: # https://docs.camunda.io/docs/self-managed/platform-deployment/docker/#operate
image: camunda/operate:8.2.15
container_name: operate
ports:
- "8081:8080"
environment: # https://docs.camunda.io/docs/self-managed/operate-deployment/configuration/
- CAMUNDA_OPERATE_ZEEBE_GATEWAYADDRESS=zeebe:26500
- CAMUNDA_OPERATE_ELASTICSEARCH_URL=http://elasticsearch:9200
- CAMUNDA_OPERATE_ZEEBEELASTICSEARCH_URL=http://elasticsearch:9200
- management.endpoints.web.exposure.include=health
- management.endpoint.health.probes.enabled=true
healthcheck:
test: [ "CMD-SHELL", "curl -f http://localhost:8080/actuator/health/readiness" ]
interval: 30s
timeout: 1s
retries: 5
start_period: 30s
networks:
- camunda-platform
depends_on:
- zeebe
- elasticsearch
optimize:
image: registry.camunda.cloud/optimize-ee/optimize:3.10.5
container_name: optimize
ports:
- 8090:8090
- 8091:8091
networks:
- camunda-platform
depends_on:
elasticsearch:
condition: service_healthy
volumes:
- type: bind
source: ./environment-config.yaml
target: /optimize/config/environment-config.yaml
- type: bind
source: ./license.txt
target: /optimize/config/OptimizeLicense.txt

elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:${ELASTIC_VERSION:-7.17.5}
container_name: elasticsearch
ports:
- 9200:9200
environment:
- bootstrap.memory_lock=true
- discovery.type=single-node
- xpack.security.enabled=false
- cluster.routing.allocation.disk.threshold_enabled=false
ulimits:
memlock:
soft: -1
hard: -1
restart: always
healthcheck:
test: [ "CMD-SHELL", "curl -f http://localhost:9200/_cat/health | grep -q green" ]
interval: 30s
timeout: 5s
retries: 3
networks:
- camunda-platform
volumes:
- elastic:/var/lib/docker/elasticsearch/data
volumes:
zeebe:
elastic:

networks:
camunda-platform: