OpenShift deployment example with nested OpenShiftStack, VPCStack, tabs, keys and values
### Step 2: Prepare your workspace

**Important:** This and subsequent steps must be executed on a host that has the OpenShift client tools installed and can reach your OpenShift cluster nodes - conveniently, this can be one of the *openshift-master* servers.

> If using MiniShift, continue using your terminal.

* SSH into your selected host and ensure you are logged in to OpenShift. If you used Step 1 to deploy OpenShift, the requested server URL is the same as the OpenShift console URL, the username is `admin`, and the password is as specified in the CloudFormation template. Otherwise, use the values specific to your environment.

```
## On an openshift-master server
oc whoami
# if not logged in yet
oc login
```

* The Solace OpenShift QuickStart project contains useful scripts to help you prepare an OpenShift project for event broker deployment. Retrieve the project on your selected host:

```
mkdir ~/workspace
cd ~/workspace
git clone https://github.com/SolaceProducts/solace-openshift-quickstart.git
cd solace-openshift-quickstart
```

### Step 3: (Optional: only execute for Deployment option 1) Install and configure Helm

Note that Helm is transitioning from v2 to v3 and many deployments still use v2. PubSub+ can be deployed using either version; however, concurrent use of v2 and v3 from the same command-line environment is not supported. Also note that there is a known [issue with using Helm v3 with OpenShift objects](https://bugzilla.redhat.com/show_bug.cgi?id=1773682); until it is resolved, Helm v2 is recommended.

This step deploys Helm's server-side Tiller operator in a dedicated "tiller-project" project. Do not use this project for your deployments.

- First download the Helm v2 client. If using Windows, get the [Helm executable](https://storage.googleapis.com/kubernetes-helm/helm-v2.16.0-windows-amd64.zip) and put it in a directory on your path.
```bash
# Download and install the latest Helm v2 client
curl -sSL https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash
```

- Then install the Tiller server-side operator.
```bash
# Set up the local Helm client
helm init --client-only
# Install the Tiller server-side operator into a new "tiller-project"
oc new-project tiller-project
oc process -f https://github.com/openshift/origin/raw/master/examples/helm/tiller-template.yaml -p TILLER_NAMESPACE="tiller-project" -p HELM_VERSION=v2.16.0 | oc create -f -
oc rollout status deployment tiller
# Also let Helm know where Tiller was deployed
export TILLER_NAMESPACE=tiller-project
```
- Alternatively, if using Helm v3 (which does not require the server-side Tiller component), use the [instructions from Helm](//github.com/helm/helm#install) or, if using Linux, simply run:
```bash
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
```
- **Important**: For each new project using Helm v2, grant admin access to the server-side Tiller service from the "tiller-project" and set the TILLER_NAMESPACE environment variable:
```bash
oc policy add-role-to-user admin "system:serviceaccount:tiller-project:tiller"
# if not already exported, ensure Helm knows where Tiller was deployed
export TILLER_NAMESPACE=tiller-project
```
> Ensure each command-line session has the TILLER_NAMESPACE environment variable properly set!
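Before proceeding, it is worth confirming that the Helm v2 client can reach Tiller. A minimal check using standard Helm and OpenShift CLI commands (exact output will vary with your versions):

```bash
# Helm should report both a Client and a Server (Tiller) version
export TILLER_NAMESPACE=tiller-project
helm version
# The Tiller pod in tiller-project should be in Running state
oc get pods -n tiller-project
```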
### Step 4: Create a new OpenShift project to host the event broker deployment

This creates a new project for your deployment. Alternatively, you can use an existing project, except the "tiller-project" created in Step 3 (that project has special privileges assigned, which must not be used for deployments).
```
oc new-project solace-pubsub # adjust your project name as needed here and in subsequent commands
```

### Step 5: (Optional) Load the event broker (Docker image) to your Docker Registry

Deployment scripts will pull the Solace PubSub+ image from a [Docker registry](https://docs.docker.com/registry/). There are several [options for which registry to use](https://docs.openshift.com/container-platform/3.11/architecture/infrastructure_components/image_registry.html#overview) depending on the requirements of your project; see some examples in (Part II) of this step.

**Hint:** You may skip the rest of this step if using the free PubSub+ Standard Edition available from the [Solace public Docker Hub registry](https://hub.docker.com/r/solace/solace-pubsub-standard/tags/). The Docker Registry URL to use will be `solace/solace-pubsub-standard:<TagName>`.
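For illustration, a minimal sketch of loading the image into a private registry, assuming Docker is installed on your host and `registry.example.com` is a placeholder for your registry:

```bash
# Pull the PubSub+ Standard image from Docker Hub
docker pull solace/solace-pubsub-standard:latest
# Tag it for your own registry (registry.example.com is a placeholder)
docker tag solace/solace-pubsub-standard:latest registry.example.com/solace-pubsub-standard:latest
# Push to your registry; a prior "docker login registry.example.com" may be required
docker push registry.example.com/solace-pubsub-standard:latest
```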
### Step 6: Deploy the event broker

- Use one of the chart variants to create a deployment. For configuration options and delete instructions, refer to the [PubSub+ Software Event Broker Helm Chart documentation](https://github.com/SolaceProducts/pubsubplus-kubernetes-quickstart/tree/HelmReorg/pubsubplus).

a) Create a Solace PubSub+ minimum deployment for development purposes using `pubsubplus-dev`. It requires a minimum of 1 CPU and 2 GB of memory to be available to the PubSub+ pod.
```bash
# Deploy PubSub+ Standard edition, minimum footprint developer version
helm install --name my-release solacecharts/pubsubplus-dev \
  --set securityContext.enabled=false
```
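If the `solacecharts` chart repository is not yet known to your Helm client, it must be registered first. A sketch of that step and of watching the deployment come up; the repository URL shown is an assumption based on the PubSub+ Kubernetes QuickStart, so verify it against the chart documentation linked above:

```bash
# Register the Solace charts repository (verify the URL in the chart documentation)
helm repo add solacecharts https://solaceproducts.github.io/pubsubplus-kubernetes-quickstart/helm-charts
helm repo update
# Watch the deployment progress; wait for the event broker pod(s) to become ready
oc get pods --show-labels -w
```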
Also note that if a deployment failed and was then deleted using `oc delete -f`, make sure to also delete any remaining PVCs. Failing to do so and retrying under the same deployment name will result in mounting an already-used PV volume, and the pod(s) may not come up.
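A sketch of checking for and removing leftover PVCs; the PVC name shown is hypothetical and depends on your actual deployment name:

```bash
# List PVCs remaining in the project
oc get pvc -n solace-pubsub
# Delete a leftover PVC by name (example name; adjust to the actual output above)
oc delete pvc data-my-release-pubsubplus-0 -n solace-pubsub
```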
Alternatively (Deployment option 2), the event broker can be deployed using the provided OpenShift templates instead of Helm. The template by default provides for a small-footprint Solace event broker deployment deployable in MiniShift. Adjust `export system_scaling_maxconnectioncount` in the template for higher scaling, but ensure adequate resources are available to the pod(s). Refer to the [System Requirements in the Solace documentation](//docs.solace.com/Configuring-and-Managing/SW-Broker-Specific-Config/Scaling-Tier-Resources.htm).

* For a **Single-Node** configuration:
  * Process the Solace 'Single Node' OpenShift template to deploy the event broker in a single-node configuration. Specify values for the DOCKER_REGISTRY_URL, MESSAGEBROKER_IMAGE_TAG, MESSAGEBROKER_STORAGE_SIZE, and MESSAGEBROKER_ADMIN_PASSWORD parameters:
```
oc project solace-pubsub # adjust your project name as needed
cd ~/workspace/solace-openshift-quickstart/templates
oc process -f messagebroker_singlenode_template.yaml DEPLOYMENT_NAME=test-singlenode DOCKER_REGISTRY_URL=