[WFLY-19359] define kafka yaml file for OpenShift and update the README.
kstekovi committed Jun 12, 2024
1 parent 01313ed commit 781a68f
Showing 4 changed files with 64 additions and 126 deletions.
@@ -29,81 +29,7 @@ function installPrerequisites()
  application="${1}"
  echo "Creating amq-streams-operator-group"

  oc apply -f - <<EOF
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: amq-streams-operator-group
  namespace: default
spec: {}
EOF

  echo "Creating amq-streams-subscription"
  oc apply -f - <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: amq-streams-subscription
  namespace: default
spec:
  channel: stable
  installPlanApproval: Automatic
  name: amq-streams
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: amqstreams.v2.5.0-0
EOF


  seconds=120
  now=$(date +%s)
  end=$(($seconds + $now))

  echo "Looping for 2 minutes until the 'kafka' CRD is available"
  while [ $now -lt $end ]; do
    # It takes a while for the kafka CRD to be ready
    sleep 5
    echo "Trying to create my-cluster"
    oc apply -f - <<EOF
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
EOF
    if [ "$?" = "0" ]; then
      break
    fi
    now=$(date +%s)
  done

  echo "Creating testing topic"
  oc apply -f - <<EOF
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: testing
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 3
EOF
  oc apply -f ./charts/kafka-on-openshift.yaml --wait --timeout=10m0s

  # Wait for the pods to come up
  seconds=900
@@ -135,10 +61,6 @@ EOF
# 1 - application name
function cleanPrerequisites()
{
  # TODO There are a few topics created that need cleaning up

  oc delete kafka my-cluster
  oc delete subscription amq-streams-subscription
  oc delete operatorgroup amq-streams-operator-group
  oc delete deployment amq-streams-cluster-operator-v2.5.0-1
  echo "Deleting all AMQ streams resources"
  oc delete -f ./charts/kafka-on-openshift.yaml --wait --timeout=10m0s
}
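For orientation, a hypothetical driver showing how these two helpers might be called around a test run; the application name argument is taken from the `# 1 - application name` comments in this script, while the surrounding harness and the name passed in are assumptions, not part of this commit:

----
installPrerequisites "microprofile-reactive-messaging-kafka"
# ... build, deploy and test the quickstart on OpenShift here ...
cleanPrerequisites "microprofile-reactive-messaging-kafka"
----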
3 changes: 3 additions & 0 deletions microprofile-reactive-messaging-kafka/README-source.adoc
@@ -987,6 +987,9 @@ bin/kafka-topics.sh --create --topic testing --bootstrap-server localhost:9092

// OpenShift
include::../shared-doc/build-and-run-the-quickstart-with-openshift.adoc[leveloffset=+1]
----
$ oc delete -f ./charts/kafka-on-openshift.yaml --wait --timeout=10m0s
----

== Conclusion

@@ -0,0 +1,52 @@
# This is the YAML needed to install Kafka provided by Strimzi on OpenShift.


---
# install the Red Hat Streams for Apache Kafka operator
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  generation: 1
  name: amq-streams
  namespace: openshift-operators
spec:
  channel: stable
  installPlanApproval: Automatic
  name: amq-streams
  source: redhat-operators
  sourceNamespace: openshift-marketplace

---
# create a Kafka Stream instance
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}

---
# create a topic in Kafka Stream instance
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: testing
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 3
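Not part of the commit, but as a quick sanity check after applying this file, commands along these lines should show the three resources it defines coming up; the CSV check assumes the default `openshift-operators` install namespace, and the Kafka resources appear in whatever namespace the file was applied to:

----
# Operator install driven by the Subscription above
$ oc get csv -n openshift-operators

# Kafka cluster and topic custom resources
$ oc get kafka my-cluster
$ oc get kafkatopic testing
----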
@@ -4,56 +4,17 @@ The functionality of this quickstart depends on a running instance of the
https://access.redhat.com/products/red-hat-amq#streams[AMQ Streams] Operator. AMQ Streams is a Red Hat project based on Apache Kafka. To deploy AMQ Streams in the Openshift environment:

. Log in to the OpenShift console as the `kubeadmin` user (or any cluster administrator).
. Navigate to `Operators` -> `OperatorHub`.
. Search for `AMQ Streams` - click on the 'AMQ Streams' operator.
+
Install it with the default values and wait for the message telling you it has been installed and is ready for use.
. In your terminal, run the following command to set up a Kafka cluster called `my-cluster` in your project:
+
[options="nowrap",subs="+attributes"]
----
$ oc apply -f - <<EOF
apiVersion: kafka.strimzi.io/{strimzi-version}
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
EOF
----
. Install the `Red Hat Streams for Apache Kafka` operator
. Create an instance of `Red Hat Streams for Apache Kafka`
. Create a topic in the `Red Hat Streams for Apache Kafka`

NOTE: If you see errors along the lines of _no matches for kind "Kafka" in version "kafka.strimzi.io/{strimzi-version}"_, execute the command `oc get crd kafkas.kafka.strimzi.io -o jsonpath="{.spec.versions[*].name}"` and update `apiVersion` to the returned version.
Install it with the default values and wait for the message telling you it has been installed and is ready for use.

In your terminal, run the following command to install the operator, set up a Kafka cluster called `my-cluster`, and create the `testing` topic in your project:

. Next set up a topic called `testing` in the `my-cluster` cluster we created:
+
[options="nowrap",subs="+attributes"]
----
oc apply -f - <<EOF
apiVersion: kafka.strimzi.io/{strimzi-version}
kind: KafkaTopic
metadata:
  name: testing
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 3
EOF
$ oc apply -f ./charts/kafka-on-openshift.yaml --wait --timeout=10m0s
----

Although the above commands will return almost immediately, your AMQ Streams instance will not be available until its entity operator is up and running. The name of the pod will be of the format `my-cluster-entity-operator-xxxxxxxxx-yyyyy`.
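If you prefer to block until that point rather than watch the pod list by hand, a sketch along these lines should work; the targets are inferred from the resources in `kafka-on-openshift.yaml` and from Strimzi's `<cluster>-entity-operator` deployment naming convention, so treat them as assumptions rather than part of this commit:

----
# Wait for the Kafka custom resource to report Ready (implies the entity operator is running)
$ oc wait kafka/my-cluster --for=condition=Ready --timeout=10m

# Or watch the entity operator deployment directly
$ oc wait deployment/my-cluster-entity-operator --for=condition=Available --timeout=10m
----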
