Red Hat publish (#47)
• Updated quick start with Installing from the Developer perspective option
• General documentation improvements
• Fixes in OpenShift template installation
• Updated automated testing using OpenShift 3.9
bczoma authored Feb 7, 2022
1 parent 3841396 commit 8c0f355
Showing 9 changed files with 134 additions and 73 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/build-test.yml
@@ -65,7 +65,7 @@ jobs:
run: |
shopt -s expand_aliases; alias remote_command=${REMOTE_COMMAND}
remote_command "/opt/scripts/helmInstallBroker helmtest"
remote_command "/opt/scripts/testBroker helmtest my-release-pubsubplus-dev" | tee out.txt
remote_command "/opt/scripts/testBroker helmtest my-release-pubsubplus-openshift-dev" | tee out.txt
grep "aurelia" out.txt # web portal access
grep "<redundancy-status>Up</redundancy-status>" out.txt
grep "<oper-status>Up</oper-status>" out.txt
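
These grep checks can be exercised offline against a captured sample; the output below is fabricated for illustration and is not real `testBroker` output:

```shell
# Fabricated excerpt of testBroker output (illustrative sample only)
cat > out.txt <<'EOF'
<title>aurelia</title>
<redundancy-status>Up</redundancy-status>
<oper-status>Up</oper-status>
EOF

# Same health checks as the workflow: each grep exits non-zero on failure,
# which fails the CI job
grep "aurelia" out.txt                                     # web portal access
grep "<redundancy-status>Up</redundancy-status>" out.txt   # broker redundancy up
grep "<oper-status>Up</oper-status>" out.txt               # broker operational
```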
91 changes: 56 additions & 35 deletions README.md
@@ -7,15 +7,23 @@ Solace [PubSub+ Platform](https://solace.com/products/platform/) is a complete e

## Overview

This project is a best practice template intended for development and demo purposes. It has been tested using OpenShift v4.6. The tested and recommended Solace PubSub+ Software Event Broker version is 9.10.
This project is a best practice template intended for development and demo purposes. It has been tested using OpenShift v4.9. The tested and recommended Solace PubSub+ Software Event Broker version is 9.12.

This document provides a quick getting started guide to install a Solace PubSub+ Software Event Broker in various configurations onto an OpenShift 4 platform. For OpenShift 3.11, refer to the [archived version of this quick start](https://github.com/SolaceProducts/pubsubplus-openshift-quickstart/tree/v1.1.1).

For detailed instructions, see [Deploying a Solace PubSub+ Software Event Broker onto an OpenShift 4 platform](/docs/PubSubPlusOpenShiftDeployment.md). There is also a general quick start for [Solace PubSub+ on Kubernetes](https://github.com/SolaceProducts/pubsubplus-kubernetes-quickstart/blob/master/docs/PubSubPlusK8SDeployment.md) available, which the OpenShift deployment builds upon.

The PubSub+ deployment does not require any special OpenShift Security Context; the default `restricted` SCC can be used.

We recommend using the Helm tool for convenience. An alternative method [using OpenShift templates](/docs/PubSubPlusOpenShiftDeployment.md#step-4-option-2-deploy-using-openshift-templates) is also available.
We recommend using the PubSub+ Helm chart for convenience. An alternative method [using OpenShift templates](/docs/PubSubPlusOpenShiftDeployment.md#step-4-option-2-deploy-using-openshift-templates) is also available.

## Pre-requisite: Access to OpenShift Platform

There are [multiple ways](https://www.openshift.com/try) to get access to an OpenShift 4 platform:
- The detailed [Event Broker on OpenShift](/docs/PubSubPlusOpenShiftDeployment.md#step-1-optional--aws-deploy-a-self-managed-openshift-container-platform-onto-aws) documentation describes how to set up a production-ready Red Hat OpenShift Container Platform on AWS.
- An option for developers is to locally deploy an all-in-one environment using [CodeReady Containers](https://developers.redhat.com/products/codeready-containers/overview).
- An easy way to get an OpenShift cluster up and running is through the [Developer Sandbox](https://developers.redhat.com/developer-sandbox) program. You can sign up for a free 14-day trial.


## Deploying PubSub+ Software Event Broker

@@ -24,18 +32,30 @@ The event broker can be deployed in either a three-node High-Availability (HA) g
In this quick start we go through the steps to set up an event broker using [Solace PubSub+ Helm charts](https://artifacthub.io/packages/search?page=1&repo=solace).

There are three Helm chart variants available with default small-size configurations:
- `pubsubplus-dev` - deploys a minimum footprint software event broker for developers (standalone)
- `pubsubplus` - deploys a standalone software event broker that supports 100 connections
- `pubsubplus-ha` - deploys three software event brokers in an HA group that supports 100 connections
- `pubsubplus-openshift-dev` - deploys a minimum footprint software event broker for developers (standalone)
- `pubsubplus-openshift` - deploys a standalone software event broker that supports 100 connections
- `pubsubplus-openshift-ha` - deploys three software event brokers in an HA group that supports 100 connections

For other event broker configurations or sizes, refer to the [PubSub+ Software Event Broker Helm Chart](https://github.com/SolaceProducts/pubsubplus-kubernetes-quickstart/blob/master/pubsubplus/README.md) documentation.

### Step 1: Get an OpenShift Environment
You can install Helm charts on an OpenShift Container Platform cluster using the following methods:
* The Developer perspective of the OpenShift Web Console; or
* The CLI

There are [multiple ways](https://www.openshift.com/try) to get access to an OpenShift 4 platform:
- The detailed [Event Broker on OpenShift](/docs/PubSubPlusOpenShiftDeployment.md#step-1-optional--aws-deploy-a-self-managed-openshift-container-platform-onto-aws) documentation describes how to set up a production-ready Red Hat OpenShift Container Platform on AWS.
- An option for developers is to locally deploy an all-in-one environment using [CodeReady Containers](https://developers.redhat.com/products/codeready-containers/overview).
- An easy way to get an OpenShift cluster up and running is through the [Developer Sandbox](https://developers.redhat.com/developer-sandbox) program. You can sign up for a free 14-day trial.
## Option 1: Installing from the OpenShift Web Console, Developer perspective

This simple method uses the OpenShift Web Console graphical interface:

* In a browser open the OpenShift Web Console, Developer perspective.
* Find and select the required PubSub+ Helm chart variant from the catalog, then click on "Install".
* Provide a unique Release Name. We recommend changing the name that is offered by default. The Release Name must not exceed 28 characters.
* If required, provide additional chart configuration. For the available options, consult the README link at the top of the page. Note that currently the "Form view" offers all possible fields, while the "YAML view" shows only those that have a configured value; it may be necessary to refresh the browser for the "YAML view" to display the latest values.
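
The 28-character limit on the Release Name can be checked up front. The helper below is hypothetical (not part of the chart or the OpenShift console), sketched under the assumption stated above that 28 characters is the maximum:

```shell
# Hypothetical pre-check for the Release Name length limit (assumed max: 28
# characters, per the guidance above); not part of the Helm chart itself
check_release_name() {
  local name="$1"
  if [ "${#name}" -gt 28 ]; then
    echo "ERROR: release name '${name}' is ${#name} characters (max 28)"
    return 1
  fi
  echo "OK: '${name}' (${#name} characters)"
}

check_release_name "my-release"
check_release_name "my-overly-long-release-name-for-development" || echo "rejected as expected"
```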

Additional information is available from the [OpenShift documentation](https://docs.openshift.com/container-platform/latest/applications/working_with_helm_charts/configuring-custom-helm-chart-repositories.html#odc-installing-helm-charts-using-developer-perspective_configuring-custom-helm-chart-repositories).

## Option 2: Installing from CLI

### Step 1: Ensure command-line console access to your OpenShift environment

Assuming you have access to an OpenShift 4 platform, log in as `kubeadmin` using the `oc login -u kubeadmin` command.

@@ -60,55 +80,56 @@ Helm is configured properly if the `helm version` command returns no error.

1. Add the Solace Helm charts to your local Helm repo:
```bash
helm repo add solacecharts https://solaceproducts.github.io/pubsubplus-kubernetes-quickstart/helm-charts
helm repo add openshift-helm-charts https://charts.openshift.io/
```

2. Create a new project or switch to your existing project (do not use the `default` project, because its loose permissions don't reflect a typical OpenShift environment):
```bash
oc new-project solace-pubsub
oc new-project solace-pubsubplus
```
By default the latest public [Docker image](https://hub.docker.com/r/solace/solace-pubsub-standard/tags/) of PubSub+ Standard Edition available from the DockerHub registry is used. To use a different image, add the following values (comma-separated) to the `--set` commands in Step 3 below:
By default the latest [Red Hat certified image](https://catalog.redhat.com/software/container-stacks/search?q=solace) of PubSub+ Standard Edition available from `registry.connect.redhat.com` is used. To use a different image, add the following values (comma-separated) to the `--set` commands in Step 3 below:
```bash
image.repository=<your-image-location>,image.tag=<your-image-tag>
helm install ... --set image.repository=<your-image-location>,image.tag=<your-image-tag>
```
If required by the image repository, you can optionally add the following:
If it is required by the image repository, you can also add the following:
```bash
image.pullSecretName=<your-image-repo-pull-secret>
--set image.pullSecretName=<your-image-repo-pull-secret>
```
3. Use one of the following Helm chart variants to create a deployment (for configuration options and deletion instructions, refer to the [PubSub+ Software Event Broker Helm Chart](https://github.com/SolaceProducts/pubsubplus-kubernetes-quickstart/tree/master/pubsubplus#configuration) documentation):
- Create a Solace PubSub+ minimum deployment for development purposes using `pubsubplus-dev`. This variant requires a minimum of 1 CPU and 3.4 GiB of memory to be available to the PubSub+ event broker pod.
```bash
# Deploy PubSub+ Standard edition, minimum footprint developer version
helm install my-release solacecharts/pubsubplus-dev \
--set securityContext.enabled=false
```
- Create a Solace PubSub+ minimum deployment for development purposes using `pubsubplus-openshift-dev`. This variant requires a minimum of 1 CPU and 3.4 GiB of memory to be available to the PubSub+ event broker pod.
```bash
# Deploy PubSub+ Standard edition, minimum footprint developer version
helm install my-release openshift-helm-charts/pubsubplus-openshift-dev
```
- Create a Solace PubSub+ standalone deployment that supports 100 connections using `pubsubplus`. A minimum of 2 CPUs and 3.4 GiB of memory must be available to the PubSub+ pod.
```bash
# Deploy PubSub+ Standard edition, standalone
helm install my-release solacecharts/pubsubplus \
--set securityContext.enabled=false
```
- Create a Solace PubSub+ standalone deployment that supports 100 connections using `pubsubplus-openshift`. A minimum of 2 CPUs and 3.4 GiB of memory must be available to the PubSub+ pod.
```bash
# Deploy PubSub+ Standard edition, standalone
helm install my-release openshift-helm-charts/pubsubplus-openshift
```
- Create a Solace PubSub+ HA deployment that supports 100 connections using `pubsubplus-ha`. This deployment requires that at least 2 CPUs and 3.4 GiB of memory are available to *each* of the three event broker pods.
```bash
# Deploy PubSub+ Standard edition, HA
helm install my-release solacecharts/pubsubplus-ha \
--set securityContext.enabled=false
```
- Create a Solace PubSub+ HA deployment that supports 100 connections using `pubsubplus-openshift-ha`. This deployment requires that at least 2 CPUs and 3.4 GiB of memory are available to *each* of the three event broker pods.
```bash
# Deploy PubSub+ Standard edition, HA
helm install my-release openshift-helm-charts/pubsubplus-openshift-ha
```
All of the Helm options above start the deployment and write related information and notes to the console.
Broker services are exposed by default through a Load Balancer that is specific to your OpenShift platform. For details, see the `Services access` section of the notes written to the console.
> Note: the `pubsubplus-openshift` Helm charts differ from the general `pubsubplus` charts in that the `securityContext.enabled` Helm parameter value is `false` by default, which is required for OpenShift.
4. Wait for the deployment to complete, following any instructions that are written to the console. You can now [validate the deployment and try the management and messaging services](/docs/PubSubPlusOpenShiftDeployment.md#validating-the-deployment).
> **Note**: There is no external Load Balancer support with CodeReady Containers. Services are accessed through NodePorts instead. Check the results of the `oc get svc my-release-pubsubplus` command. This command returns the ephemeral NodePort port numbers for each message router service. Use these port numbers together with CodeReady Containers' public IP addresses, which can be obtained by running the `crc ip` command.
> Note: There is no external Load Balancer support with CodeReady Containers. Services are accessed through NodePorts instead. Check the results of the `oc get svc my-release-pubsubplus` command. This command returns the ephemeral NodePort port numbers for each message router service. Use these port numbers together with CodeReady Containers' public IP addresses, which can be obtained by running the `crc ip` command.
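
With CodeReady Containers, the NodePort for a given service port can be read from the `oc get svc` output. The snippet below parses a captured sample; the service line and port numbers are fabricated for illustration (actual NodePorts are ephemeral and will differ):

```shell
# Fabricated sample of `oc get svc my-release-pubsubplus` output
cat > svc.txt <<'EOF'
NAME                    TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                          AGE
my-release-pubsubplus   LoadBalancer   172.30.44.13   <pending>     8080:30712/TCP,55555:31010/TCP   5m
EOF

# Extract the NodePort mapped to the broker's 55555 (SMF messaging) port
SMF_NODEPORT=$(awk 'NR==2 {print $5}' svc.txt | tr ',' '\n' | grep '^55555:' | cut -d: -f2 | cut -d/ -f1)
echo "SMF NodePort: ${SMF_NODEPORT}"   # prints: SMF NodePort: 31010

# Combine with the CodeReady Containers IP to reach the service, e.g.:
# tcp://$(crc ip):${SMF_NODEPORT}
```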

## Troubleshooting

If you have any problems, refer to the [Troubleshooting](https://github.com/SolaceProducts/pubsubplus-kubernetes-quickstart/blob/master/docs/PubSubPlusK8SDeployment.md#troubleshooting) section of the general PubSub+ Kubernetes Documentation for help. Substitute any `kubectl` commands with `oc` commands.

41 changes: 40 additions & 1 deletion ci/README.md
@@ -1,3 +1,42 @@
The scripts in this folder are used for CI, based on [RedHat CRC](https://developers.redhat.com/products/codeready-containers/overview)

Assumptions: VM in GCP with "vagrant" user, CRC installed, /opt directory with `passw` and `pullsecret` contents. /opt/scripts and /opt/templates directory exist (no content needed).
Assumptions: VM `openshift4x-test` in GCP with a `vagrant` user, `crc` installed, an `/opt` directory containing a `pullsecret` file with valid contents, and existing `/opt/scripts` and `/opt/templates` directories (no content needed). Generally, `crc` is expected to work in this VM.

# How to verify the VM is in good standing

Perform the following:

* Ensure the `openshift4x-test` VM in GCP is in a running state.
* Log in with `gcloud beta compute ssh --zone "us-east4-a" "vagrant@openshift4x-test" --project "capable-stream-180018"`. This logs you in as user `vagrant`.
* Run the following scripts - none of them should fail:
```
cd /opt
./scripts/shutdownCrc
./scripts/startCrc
./scripts/helmInstallBroker test1
./scripts/templateDeleteBroker test1
./scripts/shutdownCrc
```
* If all is well, stop the VM. Automated tests will quit if the VM is already running, on the assumption that it is in use for other purposes.

# How to upgrade the test container to latest CRC version

If `crc` requires an update for a later OpenShift version:

* Log in as user `vagrant` to the running VM, as above.
* Follow section 2.4, "Upgrading CodeReady Containers", of the Getting Started Guide at https://access.redhat.com/documentation/en-us/red_hat_codeready_containers. Untar the archive, then overwrite the existing `crc` command at `/usr/local/bin/crc`.
* Upgrade the `/usr/local/bin/oc` command similarly, if required.
* Run
```
crc version
crc stop
crc setup
```
* Fix any issues.
* Proceed to verify the VM as described in the previous section.

# Restore from disaster

A machine image has been saved at https://console.cloud.google.com/compute/machineImages/details/openshift4x-test-backup?authuser=0&project=capable-stream-180018.

Note that the machine type must support virtualization, e.g. `n1-standard-8`.
7 changes: 4 additions & 3 deletions ci/helmInstallBroker
@@ -4,8 +4,9 @@ sudo sed -i 's/nameserver.*$/nameserver 1.1.1.1/g' /etc/resolv.conf
while ! oc login -u kubeadmin -p `cat /opt/passw` https://api.crc.testing:6443 ; do sleep 1 ; done
oc new-project $1
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
helm repo add solacecharts https://solaceproducts.github.io/pubsubplus-kubernetes-quickstart/helm-charts
helm install my-release solacecharts/pubsubplus-dev --set solace.redundancy=true,securityContext.enabled=false,solace.usernameAdminPassword=admin
while ! oc get pods --show-labels | grep my-release-pubsubplus-dev | grep "active=true" ; do sleep 1; done
helm repo add solacecharts-openshift-dev https://solaceproducts.github.io/pubsubplus-kubernetes-quickstart/helm-charts-openshift
helm repo update
helm install my-release solacecharts-openshift-dev/pubsubplus-openshift-dev --set solace.redundancy=true,solace.usernameAdminPassword=admin
while ! oc get pods --show-labels | grep my-release-pubsubplus-openshift-dev | grep "active=true" ; do sleep 1; done
oc get pods --show-labels
oc get svc
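
The readiness loop in this script keys on the `active=true` pod label; its filter can be traced on sample output. The pod listing below is fabricated for illustration:

```shell
# Fabricated sample of `oc get pods --show-labels` output
cat > pods.txt <<'EOF'
NAME                                    READY   STATUS    LABELS
my-release-pubsubplus-openshift-dev-0   1/1     Running   active=true,app.kubernetes.io/name=pubsubplus-openshift-dev
my-release-pubsubplus-openshift-dev-1   1/1     Running   active=false,app.kubernetes.io/name=pubsubplus-openshift-dev
EOF

# Same filter as the wait loop: succeeds only once a broker pod of this
# release reports the active=true label
grep my-release-pubsubplus-openshift-dev pods.txt | grep "active=true"
```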
5 changes: 4 additions & 1 deletion ci/startCrc
@@ -1,8 +1,11 @@
loginctl enable-linger $USER
export XDG_RUNTIME_DIR=/run/user/$(id -u)
cd /opt
sudo systemctl stop systemd-networkd.socket systemd-networkd networkd-dispatcher systemd-networkd-wait-online
sudo systemctl disable systemd-networkd.socket systemd-networkd networkd-dispatcher systemd-networkd-wait-online
sudo /etc/init.d/network-manager start
crc setup
crc start -p /opt/pullsecret -c 7 -m 26700 --nameserver 1.1.1.1
crc start -p /opt/pullsecret -c 7 -m 26700 --nameserver 1.1.1.1 | tee out
cat out | grep kubeadmin -A 1 | grep Password | cut -d ":" -f 2 | xargs > /opt/passw
eval $(crc oc-env)
while ! oc login -u kubeadmin -p `cat /opt/passw` https://api.crc.testing:6443 ; do sleep 1 ; done
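
The new line that saves the kubeadmin password can be traced against sample `crc start` output; the excerpt and password below are fabricated for illustration:

```shell
# Fabricated excerpt of `crc start` console output
cat > out <<'EOF'
Log in as administrator:
  Username: kubeadmin
  Password: AAAAA-BBBBB-CCCCC-DDDDD
Log in as user:
  Username: developer
  Password: developer
EOF

# Same pipeline as the script: take the line after "kubeadmin", keep the
# Password line, strip the key with cut, and trim whitespace with xargs
grep kubeadmin -A 1 out | grep Password | cut -d ":" -f 2 | xargs > /tmp/passw
cat /tmp/passw   # prints: AAAAA-BBBBB-CCCCC-DDDDD
```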
2 changes: 1 addition & 1 deletion ci/testBroker
@@ -3,7 +3,7 @@ eval $(crc oc-env)
sudo sed -i 's/nameserver.*$/nameserver 1.1.1.1/g' /etc/resolv.conf
while ! oc login -u kubeadmin -p `cat /opt/passw` https://api.crc.testing:6443 ; do sleep 1 ; done
oc project $1
export IP=`oc get svc $2 -o yaml | grep clusterIP | awk -F': ' '{print $NF}'`
export IP=`oc get svc $2 -o yaml | grep 'clusterIP:' | awk -F': ' '{print $NF}'`
oc run nginx --image nginx
while ! oc get po | grep nginx | grep Running ; do sleep 1; done
oc exec -it nginx -- curl $IP:8080 | grep aurelia
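
The `clusterIP` grep fix above can be demonstrated on a sample service YAML. A plausible reason for the change (an assumption, not stated in the commit) is that newer clusters emit both a `clusterIP` and a `clusterIPs` field, so the unanchored pattern matched two lines:

```shell
# Sample service YAML excerpt with both clusterIP and clusterIPs fields
cat > svc.yaml <<'EOF'
spec:
  clusterIP: 172.30.44.13
  clusterIPs:
  - 172.30.44.13
EOF

# The old unanchored pattern matches both field names:
grep -c clusterIP svc.yaml           # prints: 2
# The fixed pattern matches only the single-value field:
IP=$(grep 'clusterIP:' svc.yaml | awk -F': ' '{print $NF}')
echo "$IP"                           # prints: 172.30.44.13
```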
