diff --git a/docs/PubSubPlusOpenShiftDeployment.md b/docs/PubSubPlusOpenShiftDeployment.md new file mode 100644 index 0000000..daaf6e0 --- /dev/null +++ b/docs/PubSubPlusOpenShiftDeployment.md @@ -0,0 +1,595 @@ +# Deploying a Solace PubSub+ Software Event Broker onto an OpenShift 3.11 platform + +This is detailed documentation of deploying Solace PubSub+ Software Event Broker onto an OpenShift 3.11 platform including steps to set up a Red Hat OpenShift Container Platform platform on AWS. +* For a hands-on quick start using an existing OpenShift platform, refer to the [Quick Start guide](/README.md). +* For considerations about deploying in a general Kubernetes environment, refer to the [Solace PubSub+ on Kubernetes Documentation](https://github.com/SolaceProducts/pubsubplus-kubernetes-quickstart/blob/master/docs/PubSubPlusK8SDeployment.md) +* For the `pubsubplus` Helm chart configuration options, refer to the [PubSub+ Software Event Broker Helm Chart Reference](/pubsubplus/README.md). + + + +Contents: + * [Purpose of this Repository](#purpose-of-this-repository) + * [Description of the Solace PubSub+ Software Event Broker](#description-of-solace-pubsub-software-event-broker) + * [Production Deployment Architecture](#production-deployment-architecture) + * [Deployment Options](#deployment-options) + - [Option 1, using Helm](#option-1-using-helm) + - [Option 2, using OpenShift templates](#option-2-using-openshift-templates) + * [How to deploy Solace PubSub+ onto OpenShift / AWS](#how-to-deploy-solace-pubsub-onto-openshift--aws) + + [Step 1: (Optional / AWS) Deploy OpenShift Container Platform onto AWS using the RedHat OpenShift AWS QuickStart Project](#step-1-optional--aws-deploy-openshift-container-platform-onto-aws-using-the-redhat-openshift-aws-quickstart-project) + + [Step 2: Prepare your workspace](#step-2-prepare-your-workspace) + + [Step 3: (Optional: only execute for Deployment option 1) Install the Helm v2 client and server-side tools](#step-3-optional-only-execute-for-deployment-option-1-install-the-helm-v2-client-and-server-side-tools) + + [Step 4: Create a new OpenShift project to host the event broker deployment](#step-4-create-a-new-openshift-project-to-host-the-event-broker-deployment) + + [Step 5: Optional: Load the event broker (Docker image) to your Docker Registry](#step-5-optional-load-the-event-broker-docker-image-to-your-docker-registry) + + [Step 6-Option 1: Deploy the event broker using Helm](#step-6-option-1-deploy-the-event-broker-using-helm) + + [Step 6-Option 2: Deploy the event broker using the OpenShift templates included in this project](#step-6-option-2-deploy-the-event-broker-using-the-openshift-templates-included-in-this-project) + * [Validating the Deployment](#validating-the-deployment) + + [Viewing Bringup Logs](#viewing-bringup-logs) + * [Gaining Admin and SSH access to the event broker](#gaining-admin-and-ssh-access-to-the-event-broker) + * [Testing data access to the event broker](#testing-data-access-to-the-event-broker) + * [Deleting a deployment](#deleting-a-deployment) + + [Deleting the PubSub+ deployment](#deleting-the-pubsub-deployment) + + [Deleting the AWS OpenShift Container Platform deployment](#deleting-the-aws-openshift-container-platform-deployment) + * [Special topics](#special-topics) + + [Using NFS for persistent storage](#using-nfs-for-persistent-storage) + * [Resources](#resources) + + +## Purpose of this Repository + +This repository provides an example of how to deploy the Solace PubSub+ Software Event Broker onto an OpenShift 3.11 
platform. There are [multiple ways](https://docs.openshift.com/index.html ) to get to an OpenShift platform, including [MiniShift](https://github.com/minishift/minishift#welcome-to-minishift ). This guide specifically uses the Red Hat OpenShift Container Platform for deploying an HA group, but the concepts are transferable to other compatible platforms. Hints are also provided on how to set up a simple single-node deployment using MiniShift for development, testing, or proof-of-concept purposes.
+
+The supported Solace PubSub+ Software Event Broker version is 9.4 or later.
+
+For the Red Hat OpenShift Container Platform, we utilize the [RedHat OpenShift on AWS QuickStart](https://aws.amazon.com/quickstart/architecture/openshift/ ) project to deploy a Red Hat OpenShift Container Platform on AWS in a highly redundant configuration, spanning 3 zones.
+
+This repository expands on the [Solace Kubernetes Quickstart](//github.com/SolaceProducts/pubsubplus-kubernetes-quickstart/blob/master/README.md ) to provide an example of how to deploy Solace PubSub+ in an HA configuration on the OpenShift Container Platform running in AWS.
+
+The event broker deployment does not require any special OpenShift Security Context; the [default "restricted" SCC](//docs.openshift.com/container-platform/3.11/admin_guide/manage_scc.html) can be used.
+
+## Description of Solace PubSub+ Software Event Broker
+
+Solace PubSub+ Software Event Broker meets the needs of big data, cloud migration, and Internet-of-Things initiatives, and enables microservices and event-driven architecture. Capabilities include topic-based publish/subscribe, request/reply, message queues/queueing, and data streaming for IoT devices and mobile/web apps. The event broker supports open APIs and standard protocols including AMQP, JMS, MQTT, REST, and WebSocket. It can also be deployed in on-premise datacenters, natively within private and public clouds, and across complex hybrid cloud environments.
+
+## Production Deployment Architecture
+
+The following diagram shows an example HA deployment in AWS:
+![alt text](/docs/images/network_diagram.jpg "Network Diagram")
+
+
+Key parts are the three PubSub+ container instances in OpenShift pods, deployed on OpenShift (worker) nodes; the cloud load balancer exposing the event broker's services and management interface; the OpenShift master node(s); and the Ansible Config Server, which acts as a bastion host for external SSH access.
+
+## Deployment Options
+
+#### Option 1, using Helm
+
+This option allows great flexibility, using the Kubernetes `Helm` tool to automate the process of event broker deployment through a wide range of configuration options, including in-service rolling upgrade of the event broker. The [Solace Kubernetes QuickStart project](https://github.com/SolaceProducts/pubsubplus-kubernetes-quickstart/tree/HelmReorg ) is used to deploy the event broker onto your OpenShift environment.
+
+#### Option 2, using OpenShift templates
+
+This option can be used directly, without any additional tool, to deploy the event broker in a limited number of configurations, using the OpenShift templates included in this project.
+
+
+## How to deploy Solace PubSub+ onto OpenShift / AWS
+
+The following steps describe how to deploy an event broker onto an OpenShift environment. Optional steps are included for setting up a Red Hat OpenShift Container Platform on Amazon AWS infrastructure (marked as Optional / AWS) and for using AWS Elastic Container Registry to host the Solace PubSub+ Docker image (marked as Optional / ECR).
+
+**Hint:** You may skip Step 1 if you already have your own OpenShift environment available.
+
+> Note: If using MiniShift, follow the [instructions to get to a working MiniShift deployment](https://docs.okd.io/latest/minishift/getting-started/index.html ). If using MiniShift in a Windows environment, one easy way to follow the shell scripts in the subsequent steps of this guide is to use [Git BASH for Windows](https://gitforwindows.org/ ) and ensure any script files are using Unix style line endings by running the `dos2unix` tool if needed.
+
+### Step 1: (Optional / AWS) Deploy OpenShift Container Platform onto AWS using the RedHat OpenShift AWS QuickStart Project
+
+* (Part I) Log into the AWS Web Console and run the [OpenShift AWS QuickStart project](https://aws.amazon.com/quickstart/architecture/openshift/ ), which will use AWS CloudFormation for the deployment. We recommend you deploy OpenShift across 3 AWS Availability Zones for maximum redundancy. Please refer to the RedHat OpenShift AWS QuickStart guide and supporting documentation:
+
+  * [Deploying and Managing OpenShift on Amazon Web Services](https://access.redhat.com/documentation/en-us/reference_architectures/2018/html/deploying_and_managing_openshift_3.9_on_amazon_web_services/ )
+
+  **Important:** As described in the above documentation, this deployment requires a Red Hat account with a valid Red Hat subscription to OpenShift and will consume 10 OpenShift entitlements in a maximum redundancy configuration. When no longer needed, ensure you follow the steps in the [Deleting the AWS OpenShift Container Platform deployment](#deleting-the-aws-openshift-container-platform-deployment ) section of this guide to free up the entitlements.
+
+  This deployment will create 10 EC2 instances: an *ansible-configserver* and three each of *openshift-etcd*, *openshift-master* and *openshift-nodes* servers.
+ + **Note:** only the "*ansible-configserver*" is exposed externally in a public subnet. To access the other servers that are in a private subnet, first [SSH into](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html ) the *ansible-configserver* instance then use that instance as a bastion host to SSH into the target server using it's private IP. For that we recommend enabling [SSH agent forwarding](https://developer.github.com/v3/guides/using-ssh-agent-forwarding/ ) on your local machine to avoid the insecure option of copying and storing private keys remotely on the *ansible-configserver*. + +* (Part II) Once you have deployed OpenShift using the AWS QuickStart you will have to perform additional steps to re-configure OpenShift to integrate fully with AWS. For full details, please refer to the RedHat OpenShift documentation for configuring OpenShift for AWS: + + * [OpenShift > Configuring for AWS](https://docs.openshift.com/container-platform/3.10/install_config/configuring_aws.html ) + + To help with that this quick start provides a script to automate the execution of the required steps: + + * Add the required AWS IAM policies to the ‘Setup Role’ (IAM) used by the RedHat QuickStart to deploy OpenShift to AWS + * Tag public subnets so when creating a public service suitable public subnets can be found + * Re-configure OpenShift Masters and OpenShift Nodes to make OpenShift aware of AWS deployment specifics + + SSH into the *ansible-configserver* then follow the commands. + +``` +## On the ansible-configserver server +# get the scripts +cd ~ +git clone https://github.com/SolaceProducts/solace-openshift-quickstart.git +cd solace-openshift-quickstart/scripts +# substitute your own parameters for the following exports +# You can get the stack names e.g.: from the CloudFormation page of the AWS services console, +# see the 'Overview' tab of the *nested* OpenShiftStack and VPC substacks. +# You can get the access keys from the AWS services console IAM > Users > Security credentials. +export NESTEDOPENSHIFTSTACK_STACKNAME=XXXXXXXXXXXXXXXXXXXXX +export VPC_STACKNAME=XXXXXXXXXXXXXXXXXXXXX +export AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXXXXX +export AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXX +# run the config script +./configureAWSOpenShift.sh +``` + +The script will end with listing the private IP of the *openshift-master* servers, one of which you will need to SSH into for the next step. The command to access it is `ssh ` with SSH agent forwarding enabled. + +Also verify you have access and can login to the OpenShift console. You can get the URL from the CloudFormation page of the AWS services console, see the 'Outputs' tab of the *nested* OpenShiftStack substack. + +![alt text](/docs/images/GetOpenShiftURL.png "Getting to OpenShift console URL") + +

OpenShift deployment example with nested OpenShiftStack, VPCStack, tabs, keys and values

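+Before moving on to Step 2, it can help to see what the bastion-host access described above looks like in practice. The following is only a sketch with SSH agent forwarding enabled; the key file, user name, public DNS name, and private IP are placeholders for your own values:
+
+```bash
+# On your local machine: load your key and enable agent forwarding for the session
+ssh-add <path-to-your-key>.pem
+ssh -A ec2-user@<ansible-configserver-public-dns>
+# On the ansible-configserver: hop to an openshift-master node using its private IP
+ssh <openshift-master-private-ip>
+```
+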
+ + +### Step 2: Prepare your workspace + +**Important:** This and subsequent steps shall be executed on a host having the OpenShift client tools and able to reach your OpenShift cluster nodes - conveniently, this can be one of the *openshift-master* servers. + +> If using MiniShift, continue using your terminal. + +* SSH into your selected host and ensure you are logged in to OpenShift. If you used Step 1 to deploy OpenShift, the requested server URL is the same as the OpenShift console URL, the username is `admin` and the password is as specified in the CloudFormation template. Otherwise use the values specific to your environment. + +``` +## On an openshift-master server +oc whoami +# if not logged in yet +oc login +``` + +* The Solace OpenShift QuickStart project contains useful scripts to help you prepare an OpenShift project for event broker deployment. Retrieve the project in your selected host: + +``` +mkdir ~/workspace +cd ~/workspace +git clone https://github.com/SolaceProducts/solace-openshift-quickstart.git +cd solace-openshift-quickstart +``` + +### Step 3: (Optional: only execute for Deployment option 1) Install the Helm v2 client and server-side tools + +This will deploy Helm in a dedicated "tiller-project" project. Do not use this project for your deployments. + +- First download the Helm v2 client. If using Windows, get the [Helm executable](https://storage.googleapis.com/kubernetes-helm/helm-v2.16.0-windows-amd64.zip ) and put it in a directory on your path. +```bash + # Download Helm v2 client, latest version if needed + curl -sSL https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash +``` + +- Use script to install the Helm v2 client and its Tiller server-side operator. +```bash + # Setup local Helm client + helm init --client-only + # Install Tiller server-side operator into a new "tiller-project" + oc new-project tiller-project + oc process -f https://github.com/openshift/origin/raw/master/examples/helm/tiller-template.yaml -p TILLER_NAMESPACE="tiller-project" -p HELM_VERSION=v2.16.0 | oc create -f - + oc rollout status deployment tiller + # also let Helm know where Tiller was deployed + export TILLER_NAMESPACE=tiller-project +``` + +### Step 4: Create a new OpenShift project to host the event broker deployment + +This will create a new project for deployments if needed or you can use your existing project except "helm" (the "helm" project has special privileges assigned which shall not be used for deployments). +``` +oc new-project solace-pubsub # adjust your project name as needed here and in subsequent commands +``` + +### Step 5: Optional: Load the event broker (Docker image) to your Docker Registry + +Deployment scripts will pull the Solace PubSub+ image from a [Docker registry](https://docs.Docker.com/registry/ ). There are several [options which registry to use](https://docs.openshift.com/container-platform/3.11/architecture/infrastructure_components/image_registry.html#overview ) depending on the requirements of your project, see some examples in (Part II) of this step. + +**Hint:** You may skip the rest of this step if using the free PubSub+ Standard Edition available from the [Solace public Docker Hub registry](https://hub.Docker.com/r/solace/solace-pubsub-standard/tags/ ). The Docker Registry URL to use will be `solace/solace-pubsub-standard:`. + +* **(Part I)** Download a copy of the event broker Docker image. 
+ + Go to the Solace Developer Portal and download the Solace PubSub+ as a **Docker** image or obtain your version from Solace Support. + + * If using Solace PubSub+ Enterprise Evaluation Edition, go to the Solace Downloads page. For the image reference, copy and use the download URL in the Solace PubSub+ Enterprise Evaluation Edition Docker Images section. + + | PubSub+ Enterprise Evaluation Edition
Docker Image + | :---: | + | 90-day trial version of PubSub+ Enterprise | + | [Get URL of Evaluation Docker Image](http://dev.solace.com/downloads#eval ) | + + +* **(Part II)** Deploy the event broker Docker image to your Docker registry of choice + + Options include: + + * You can choose to use [OpenShift's Docker registry.](https://docs.openshift.com/container-platform/3.10/install_config/registry/deploy_registry_existing_clusters.html ). For MiniShift a simple option is to use the [Minishift Docker daemon](//docs.okd.io/latest/minishift/using/docker-daemon.html). + + * **(Optional / ECR)** You can utilize the AWS Elastic Container Registry (ECR) to host the event broker Docker image. For more information, refer to [Amazon Elastic Container Registry](https://aws.amazon.com/ecr/ ). If you are using ECR as your Docker registry then you must add the ECR login credentials (as an OpenShift secret) to your event broker HA deployment. This project contains a helper script to execute this step: + +```shell + # Required if using ECR for Docker registry + cd ~/workspace/solace-openshift-quickstart/scripts + sudo su + aws configure # provide AWS config for root; provide your key ID, key and region. + ./addECRsecret.sh solace-pubsub # adjust your project name as needed +``` + + Here is an outline of the additional steps required if loading an image to ECR: + + * Copy the Solace Docker image location and download the image archive locally using the `wget ` command. + * Load the downloaded image to the local docker image repo using the `docker load -i ` command + * Go to your target ECR repository in the [AWS ECR Repositories console](https://console.aws.amazon.com/ecr ) and get the push commands information by clicking on the "View push commands" button. + * Start from the `docker tag` command to tag the image you just loaded. Use `docker images` to find the Solace Docker image just loaded. You may need to use + * Finally, use the `docker push` command to push the image. + * Exit from superuser to normal user + +![alt text](/docs/images/ECR-Registry.png "ECR Registry") + +
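+For illustration, the ECR steps outlined above might look like the following sketch; the download URL, image name, tag, AWS account ID, region, and repository name are placeholders that will differ in your environment:
+
+```bash
+# Download the image archive and load it into the local Docker repository
+wget <solace-image-download-url>
+docker load -i <downloaded-image-archive>.tar.gz
+docker images        # note the repository and tag of the loaded Solace image
+# Log in to ECR, then tag and push as shown by the "View push commands" button
+$(aws ecr get-login --no-include-email --region <aws-region>)
+docker tag <solace-repository>:<tag> <aws-account-id>.dkr.ecr.<aws-region>.amazonaws.com/<ecr-repository>:<tag>
+docker push <aws-account-id>.dkr.ecr.<aws-region>.amazonaws.com/<ecr-repository>:<tag>
+```
+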
+ +For general additional information, refer to the [Using private registries](https://github.com/SolaceProducts/pubsubplus-kubernetes-quickstart/blob/master/docs/PubSubPlusK8SDeployment.md#using-private-registries) section in the general Event Broker in Kubernetes Documentation. + +### Step 6-Option 1: Deploy the event broker using Helm + +Deploying using Helm provides more flexibility in terms of event broker deployment options, compared to those offered by the OpenShift templates provided by this project. + +More information is provided in the following documents: +* [Solace PubSub+ on Kubernetes Deployment Guide](//github.com/SolaceProducts/pubsubplus-kubernetes-quickstart/blob/master/docs/PubSubPlusK8SDeployment.md) +* [Kubernetes Deployment Quick Start Guide](//github.com/SolaceProducts/pubsubplus-kubernetes-quickstart/blob/master/README.md) + +The deployment is using PubSub+ Software Event Broker Helm charts and customized by overriding [default chart parameters](//github.com/SolaceProducts/pubsubplus-kubernetes-quickstart/tree/HelmReorg/pubsubplus#configuration). + +Consult the [Deployment Considerations](https://github.com/SolaceProducts/pubsubplus-kubernetes-quickstart/blob/master/docs/PubSubPlusK8SDeployment.md#pubsub-event-broker-deployment-considerations) section of the general Event Broker in Kubernetes Documentation when planning your deployment. + +In particular, the `securityContext.enabled` parameter must be set to `false`, indicating not to use the provided pod security context but let OpenShift set it, using SecurityContextConstraints (SCC). By default OpenShift will use the "restricted" SCC. + +By default the publicly available [latest Docker image of PubSub+ Standard Edition](https://hub.Docker.com/r/solace/solace-pubsub-standard/tags/) will be used. [Load a different image into a registry](#step-5-optional-load-the-event-broker-docker-image-to-your-docker-registry) if required. If using a different image, add the `image.repository=,image.tag=` values to the `--set` commands below, comma-separated. + +Solace PubSub+ can be vertically scaled by deploying in one of the [client connection scaling tiers](//docs.solace.com/Configuring-and-Managing/SW-Broker-Specific-Config/Scaling-Tier-Resources.htm), controlled by the `solace.size` chart parameter. + +Next an HA and a non-HA deployment examples are provided, using default parameters. For configuration options, refer to the [Solace PubSub+ Advanced Event Broker Helm Chart](https://github.com/SolaceProducts/pubsubplus-kubernetes-quickstart/tree/HelmReorg/pubsubplus) reference. +After initiating a deployment with one of the commands below skip to the [Validating the Deployment](#validating-the-deployment) section. + +- **Important**: For each new project using Helm v2, grant admin access to the server-side Tiller service from the "tiller-project" and set the TILLER_NAMESPACE environment, which is used by the Helm client to locate where Tiller has been deployed. +```bash + oc policy add-role-to-user admin "system:serviceaccount:tiller-project:tiller" + # if not already exported, ensure Helm knows where Tiller was deployed + export TILLER_NAMESPACE=tiller-project +``` + +> Ensure each command-line session has the TILLER_NAMESPACE environment variable properly set! 
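+Before deploying, you can confirm that the Helm client can reach the Tiller instance in "tiller-project". Assuming TILLER_NAMESPACE has been exported as above, the following should report both the client and server versions without error:
+
+```bash
+helm version
+```
+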
+
+HA deployment example:
+
+```bash
+# One-time action: Add the PubSub+ charts to local Helm
+helm repo add solacecharts https://solaceproducts.github.io/pubsubplus-kubernetes-quickstart/helm-charts
+# Initiate the HA deployment
+helm install --name my-ha-release \
+  --set securityContext.enabled=false,solace.redundancy=true,solace.usernameAdminPassword=<admin-password> \
+  solacecharts/pubsubplus
+# Check the notes printed on screen
+# Wait until all pods are running and ready and the active event broker pod label is "active=true"
+oc get pods --show-labels -w
+```
+
+Single-node, non-HA deployment example:
+
+```bash
+# One-time action: Add the PubSub+ charts to local Helm
+helm repo add solacecharts https://solaceproducts.github.io/pubsubplus-kubernetes-quickstart/helm-charts
+# Initiate the non-HA deployment
+helm install --name my-nonha-release \
+  --set securityContext.enabled=false,solace.redundancy=false,solace.usernameAdminPassword=<admin-password> \
+  solacecharts/pubsubplus
+# Check the notes printed on screen
+# Wait until the event broker pod is running, ready and the pod label is "active=true"
+oc get pods --show-labels -w
+```
+
+Note: an alternative to long `--set` parameter lists is to define the same parameter values in a YAML file:
+```bash
+# Create example values file
+echo "
+securityContext:
+  enabled: false
+solace:
+  redundancy: true
+  usernameAdminPassword: <admin-password>" > deployment-values.yaml
+# Use values file
+helm install --name my-release \
+  -f deployment-values.yaml \
+  solacecharts/pubsubplus
+```
+
+### Step 6-Option 2: Deploy the event broker using the OpenShift templates included in this project
+
+This deployment uses OpenShift templates and does not require Helm.
+
+**Prerequisites:**
+1. Determine your event broker disk space requirements. We recommend a minimum of 30 gigabytes of disk space.
+2. Define a strong password for the 'admin' user of the event broker and then base64 encode the value. This value will be specified as a parameter when processing the event broker OpenShift template:
+```
+echo -n 'strong@dminPw!' | base64
+```
+3. Switch to the templates directory:
+```
+oc project solace-pubsub # adjust your project name as needed
+cd ~/workspace/solace-openshift-quickstart/templates
+```
+
+**Deploy the event broker:**
+
+You can deploy the event broker in either a single-node or high-availability configuration.
+
+Note: DOCKER_REGISTRY_URL and EVENTBROKER_IMAGE_TAG default to `solace/solace-pubsub-standard` and `latest`, and EVENTBROKER_STORAGE_SIZE defaults to 30Gi.
+
+The template by default provides for a small-footprint Solace PubSub+ deployment deployable in MiniShift. Adjust `export system_scaling_maxconnectioncount` in the template for higher scaling, but ensure adequate resources are available to the pod(s). Refer to the [System Requirements in the Solace documentation](//docs.solace.com/Configuring-and-Managing/SW-Broker-Specific-Config/Scaling-Tier-Resources.htm).
+
+Also note that if a deployment has failed and has been deleted using `oc delete -f`, ensure you also delete any remaining PVCs. Failing to do so and retrying with the same deployment name will result in mounting an already used PV volume, and the pod(s) may not come up.
+
+* For a **Single-Node** configuration:
+  * Process the Solace 'Single Node' OpenShift template to deploy the event broker in a single-node configuration.
Specify values for the DOCKER_REGISTRY_URL, EVENTBROKER_IMAGE_TAG, EVENTBROKER_STORAGE_SIZE, and EVENTBROKER_ADMIN_PASSWORD parameters: +``` +oc project solace-pubsub # adjust your project name as needed +cd ~/workspace/solace-openshift-quickstart/templates +oc process -f eventbroker_singlenode_template.yaml DEPLOYMENT_NAME=test-singlenode DOCKER_REGISTRY_URL= EVENTBROKER_IMAGE_TAG= EVENTBROKER_STORAGE_SIZE=30Gi EVENTBROKER_ADMIN_PASSWORD= | oc create -f - +# Wait until all pods running and ready +watch oc get statefulset,service,pods,pvc,pv +``` + +* For a **High-Availability** configuration: + * Process the Solace 'HA' OpenShift template to deploy the event broker in a high-availability configuration. Specify values for the DOCKER_REGISTRY_URL, EVENTBROKER_IMAGE_TAG, EVENTBROKER_STORAGE_SIZE, and EVENTBROKER_ADMIN_PASSWORD parameters: +``` +oc project solace-pubsub # adjust your project name as needed +cd ~/workspace/solace-openshift-quickstart/templates +oc process -f eventbroker_ha_template.yaml DEPLOYMENT_NAME=test-ha DOCKER_REGISTRY_URL= EVENTBROKER_IMAGE_TAG= EVENTBROKER_STORAGE_SIZE=30Gi EVENTBROKER_ADMIN_PASSWORD= | oc create -f - +# Wait until all pods running and ready +watch oc get statefulset,service,pods,pvc,pv +``` + +## Validating the Deployment + +If there are any issues with the deployment, refer to the [Kubernetes Troubleshooting Guide](https://github.com/SolaceProducts/pubsubplus-kubernetes-quickstart/blob/master/docs/PubSubPlusK8SDeployment.md#troubleshooting) - substitute any `kubectl` commands with `oc` commands. Before retrying a deployment, ensure to delete PVCs remaining from the unsuccessful deployment - use `oc get pvc` to determine which ones. + +Now you can validate your deployment from the OpenShift client shell: + +``` +[ec2-user@ip-10-0-23-198 ~]$ oc get statefulset,service,pods,pvc,pv --show-labels +NAME DESIRED CURRENT AGE LABELS +statefulset.apps/my-release-pubsubplus 3 3 2h app.kubernetes.io/instance=my-release,app.kubernetes.io/managed-by=Tiller,app.kubernetes.io/name=pubsubplus,helm.sh/chart=pubsubplus-1.0.0 + +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE LABELS +service/my-release-pubsubplus LoadBalancer 172.30.44.13 a7d53a67e0d3911eaab100663456a67b-73396344.eu-central-1.elb.amazonaws.com 22:32084/TCP,8080:31060/TCP,943:30321/TCP,55555:32434/TCP,55003:32160/TCP,55443:30635/TCP,80:30142/TCP,443:30411/TCP,5672:30595/TCP,1883:30511/TCP,9000:32277/TCP 2h app.kubernetes.io/instance=my-release,app.kubernetes.io/managed-by=Tiller,app.kubernetes.io/name=pubsubplus,helm.sh/chart=pubsubplus-1.0.0 +service/my-release-pubsubplus-discovery ClusterIP None 8080/TCP,8741/TCP,8300/TCP,8301/TCP,8302/TCP 2h app.kubernetes.io/instance=my-release,app.kubernetes.io/managed-by=Tiller,app.kubernetes.io/name=pubsubplus,helm.sh/chart=pubsubplus-1.0.0 + +NAME READY STATUS RESTARTS AGE LABELS +pod/my-release-pubsubplus-0 1/1 Running 0 2h active=true,app.kubernetes.io/instance=my-release,app.kubernetes.io/name=pubsubplus,controller-revision-hash=my-release-pubsubplus-7b788f768b,statefulset.kubernetes.io/pod-name=my-release-pubsubplus-0 +pod/my-release-pubsubplus-1 1/1 Running 0 2h active=false,app.kubernetes.io/instance=my-release,app.kubernetes.io/name=pubsubplus,controller-revision-hash=my-release-pubsubplus-7b788f768b,statefulset.kubernetes.io/pod-name=my-release-pubsubplus-1 +pod/my-release-pubsubplus-2 1/1 Running 0 2h 
app.kubernetes.io/instance=my-release,app.kubernetes.io/name=pubsubplus,controller-revision-hash=my-release-pubsubplus-7b788f768b,statefulset.kubernetes.io/pod-name=my-release-pubsubplus-2 + +NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE LABELS +persistentvolumeclaim/data-my-release-pubsubplus-0 Bound pvc-7d596ac0-0d39-11ea-ab10-0663456a67be 30Gi RWO gp2 2h app.kubernetes.io/instance=my-release,app.kubernetes.io/name=pubsubplus +persistentvolumeclaim/data-my-release-pubsubplus-1 Bound pvc-7d5c60e9-0d39-11ea-ab10-0663456a67be 30Gi RWO gp2 2h app.kubernetes.io/instance=my-release,app.kubernetes.io/name=pubsubplus +persistentvolumeclaim/data-my-release-pubsubplus-2 Bound pvc-7d5f8838-0d39-11ea-ab10-0663456a67be 30Gi RWO gp2 2h app.kubernetes.io/instance=my-release,app.kubernetes.io/name=pubsubplus + +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE LABELS +persistentvolume/pvc-58223d93-0b93-11ea-833a-0246f4c5a982 10Gi RWO Delete Bound openshift-infra/metrics-cassandra-1 gp2 2d failure-domain.beta.kubernetes.io/region=eu-central-1,failure-domain.beta.kubernetes.io/zone=eu-central-1c +persistentvolume/pvc-7d596ac0-0d39-11ea-ab10-0663456a67be 30Gi RWO Delete Bound solace-pubsub/data-my-release-pubsubplus-0 gp2 2h failure-domain.beta.kubernetes.io/region=eu-central-1,failure-domain.beta.kubernetes.io/zone=eu-central-1c +persistentvolume/pvc-7d5c60e9-0d39-11ea-ab10-0663456a67be 30Gi RWO Delete Bound solace-pubsub/data-my-release-pubsubplus-1 gp2 2h failure-domain.beta.kubernetes.io/region=eu-central-1,failure-domain.beta.kubernetes.io/zone=eu-central-1a +persistentvolume/pvc-7d5f8838-0d39-11ea-ab10-0663456a67be 30Gi RWO Delete Bound solace-pubsub/data-my-release-pubsubplus-2 gp2 2h failure-domain.beta.kubernetes.io/region=eu-central-1,failure-domain.beta.kubernetes.io/zone=eu-central-1b +[ec2-user@ip-10-0-23-198 ~]$ +[ec2-user@ip-10-0-23-198 ~]$ +[ec2-user@ip-10-0-23-198 ~]$ oc describe svc +Name: my-release-pubsubplus +Namespace: solace-pubsub +Labels: app.kubernetes.io/instance=my-release + app.kubernetes.io/managed-by=Tiller + app.kubernetes.io/name=pubsubplus + helm.sh/chart=pubsubplus-1.0.0 +Annotations: +Selector: active=true,app.kubernetes.io/instance=my-release,app.kubernetes.io/name=pubsubplus +Type: LoadBalancer +IP: 172.30.44.13 +LoadBalancer Ingress: a7d53a67e0d3911eaab100663456a67b-73396344.eu-central-1.elb.amazonaws.com +Port: ssh 22/TCP +TargetPort: 2222/TCP +NodePort: ssh 32084/TCP +Endpoints: 10.131.0.17:2222 +Port: semp 8080/TCP +TargetPort: 8080/TCP +NodePort: semp 31060/TCP +Endpoints: 10.131.0.17:8080 +Port: semptls 943/TCP +TargetPort: 60943/TCP +NodePort: semptls 30321/TCP +Endpoints: 10.131.0.17:60943 +Port: smf 55555/TCP +TargetPort: 55555/TCP +NodePort: smf 32434/TCP +Endpoints: 10.131.0.17:55555 +Port: smfcomp 55003/TCP +TargetPort: 55003/TCP +NodePort: smfcomp 32160/TCP +Endpoints: 10.131.0.17:55003 +Port: smftls 55443/TCP +TargetPort: 55443/TCP +NodePort: smftls 30635/TCP +Endpoints: 10.131.0.17:55443 +Port: web 80/TCP +TargetPort: 60080/TCP +NodePort: web 30142/TCP +Endpoints: 10.131.0.17:60080 +Port: webtls 443/TCP +TargetPort: 60443/TCP +NodePort: webtls 30411/TCP +Endpoints: 10.131.0.17:60443 +Port: amqp 5672/TCP +TargetPort: 5672/TCP +NodePort: amqp 30595/TCP +Endpoints: 10.131.0.17:5672 +Port: mqtt 1883/TCP +TargetPort: 1883/TCP +NodePort: mqtt 30511/TCP +Endpoints: 10.131.0.17:1883 +Port: rest 9000/TCP +TargetPort: 9000/TCP +NodePort: rest 32277/TCP +Endpoints: 10.131.0.17:9000 +Session Affinity: None +External Traffic 
Policy: Cluster +Events: +``` + +Find the **'LoadBalancer Ingress'** value listed in the service description above. This is the publicly accessible Solace Connection URI for messaging clients and management. In the example it is `a7d53a67e0d3911eaab100663456a67b-73396344.eu-central-1.elb.amazonaws.com`. + +> Note: If using MiniShift an additional step is required to expose the service: `oc get --export svc my-release-pubsubplus`. This will return a service definition with nodePort port numbers for each message router service. Use these port numbers together with MiniShift's public IP address which can be obtained from the command `minishift ip`. + +### Viewing Bringup logs + +To see the deployment events, navigate to: + +* **OpenShift UI > (Your Project) > Applications > Stateful Sets > ((name)-pubsubplus) > Events** + +You can access the log stack for individual event broker pods from the OpenShift UI, by navigating to: + +* **OpenShift UI > (Your Project) > Applications > Stateful Sets > ((name)-pubsubplus) > Pods > ((name)-solace-(N)) > Logs** + +![alt text](/docs/images/Solace-Pod-Log-Stack.png "Event Broker Pod Log Stack") + +Where (N) above is the ordinal of the Solace PubSub+: + * 0 - Primary event broker + * 1 - Backup event broker + * 2 - Monitor event broker + +## Gaining Admin and SSH access to the event broker + +The external management host URI will be the Solace Connection URI associated with the load balancer generated by the event broker OpenShift template. Access will go through the load balancer service as described in the introduction and will always point to the active event broker. The default port is 22 for CLI and 8080 for SEMP/SolAdmin. + +If you deployed OpenShift in AWS, then the Solace OpenShift QuickStart will have created an EC2 Load Balancer to front the event broker / OpenShift service. The Load Balancer public DNS name can be found in the AWS EC2 console under the 'Load Balancers' section. + +To launch Solace CLI or SSH into the individual event broker instances from the OpenShift CLI use: + +``` +# CLI access +oc exec -it XXX-XXX-pubsubplus-X cli # adjust pod name to your deployment +# shell access +oc exec -it XXX-XXX-pubsubplus-X bash # adjust pod name to your deployment +``` + +> Note for MiniShift: if using Windows you may get an error message: `Unable to use a TTY`. Install and preceed above commands with `winpty` until this is fixed in the MiniShift project. + + +You can also gain access to the Solace CLI and container shell for individual event broker instances from the OpenShift UI. A web-based terminal emulator is available from the OpenShift UI. Navigate to an individual event broker Pod using the OpenShift UI: + +* **OpenShift UI > (Your Project) > Applications > Stateful Sets > ((name)-pubsubplus) > Pods > ((name)-pubsubplus-(N)) > Terminal** + +Once you have launched the terminal emulator to the event broker pod you may access the Solace CLI by executing the following command: + +``` +/usr/sw/loads/currentload/bin/cli -A +``` + +![alt text](/docs/images/Solace-Primary-Pod-Terminal-CLI.png "Event Broker CLI via OpenShift UI Terminal emulator") + +See the [Solace Kubernetes Quickstart README](//github.com/SolaceProducts/pubsubplus-kubernetes-quickstart/blob/master/docs/PubSubPlusK8SDeployment.md#gaining-admin-access-to-the-event-broker ) for more details including admin and SSH access to the individual event brokers. 
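+As a convenience, the management address can also be looked up from the command line. The sketch below assumes the Helm-based HA example above (service name `my-release-pubsubplus`); adjust the service name to your own deployment:
+
+```bash
+# Look up the externally accessible address of the broker service
+SERVICE_ADDRESS=$(oc get svc my-release-pubsubplus -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
+echo "PubSub+ Manager / SEMP: http://${SERVICE_ADDRESS}:8080"
+# Quick reachability check of the SEMP management port (expects an HTTP status code)
+curl -s -o /dev/null -w "%{http_code}\n" http://${SERVICE_ADDRESS}:8080
+```
+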
+
+## Testing data access to the event broker
+
+To test data traffic through the newly created event broker instance, visit the Solace Developer Portal and select your preferred programming language to [send and receive messages](http://dev.solace.com/get-started/send-receive-messages/ ). Under each language there is a Publish/Subscribe tutorial that will help you get started.
+
+Note: the Host will be the Solace Connection URI. It may be necessary to [open up external access to a port](//github.com/SolaceProducts/pubsubplus-kubernetes-quickstart/blob/master/docs/PubSubPlusK8SDeployment.md#modifying-or-upgrading-a-deployment ) used by the particular messaging API if it is not already exposed.
+
+![alt text](/docs/images/solace_tutorial.png "getting started publish/subscribe")
+
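+For a quick protocol-level smoke test that does not require installing a messaging API, the REST port (9000 in the default service shown earlier) can be exercised with `curl`. This is only a sketch; replace the placeholder with your Solace Connection URI and note that it publishes to the default message VPN:
+
+```bash
+# Publish a test message to topic "test/hello" via the REST messaging port
+curl -X POST -H "Content-Type: text/plain" \
+  -d "Hello from curl" \
+  http://<solace-connection-uri>:9000/TOPIC/test/hello
+```
+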
+
+## Deleting a deployment
+
+### Deleting the PubSub+ deployment
+
+To delete the deployment or to start over from Step 6 in a clean state:
+
+* If you used Helm (Option 1) to deploy, execute:
+
+```
+helm list           # will list the releases (deployments)
+helm delete XXX-XXX # will delete instances related to your deployment - "my-release" in the example above
+```
+
+* If you used the OpenShift templates (Option 2) to deploy, use:
+
+```
+cd ~/workspace/solace-openshift-quickstart/templates
+oc process -f <template-used> DEPLOYMENT_NAME=<deployment-name> | oc delete -f -
+```
+
+**Note:** The above will not delete dynamic Persistent Volumes (PVs) and related Persistent Volume Claims (PVCs). If you recreate the deployment with the same name and keep the original PVCs, the original volumes are mounted with the existing configuration. Deleting the PVCs will also delete the PVs:
+
+```
+# List PVCs
+oc get pvc
+# Delete unneeded PVCs
+oc delete pvc <pvc-name>
+```
+
+To remove the project or to start over from Step 4 in a clean state, delete the project using the OpenShift console or the command line. For more details, refer to the [OpenShift Projects](https://docs.openshift.com/enterprise/3.0/dev_guide/projects.html ) documentation.
+
+```
+oc delete project solace-pubsub # adjust your project name as needed
+```
+
+### Deleting the AWS OpenShift Container Platform deployment
+
+To delete the OpenShift Container Platform deployment that was set up in Step 1, you first need to detach the IAM policies from the ‘Setup Role’ (IAM) that were attached in (Part II) of Step 1. You also need to free up the allocated OpenShift entitlements from your subscription, otherwise they will not be available for a subsequent deployment.
+
+Use this quick start's script to automate the execution of the required steps. SSH into the *ansible-configserver*, then follow the commands:
+
+```
+# assuming solace-openshift-quickstart/scripts are still available from Step 1
+cd ~/solace-openshift-quickstart/scripts
+./prepareDeleteAWSOpenShift.sh
+```
+
+Now the OpenShift stack delete can be initiated from the AWS CloudFormation console.
+
+## Special topics
+
+### Using NFS for persistent storage
+
+The Solace PubSub+ event broker supports NFS for persistent storage, with the "root_squash" option configured on the NFS server.
+
+For an example deployment, specify the storage class from your NFS deployment ("nfs" in this example) in the `storage.useStorageClass` parameter and ensure `storage.slow` is set to `true`.
+
+The Helm [NFS Server Provisioner](https://github.com/helm/charts/tree/master/stable/nfs-server-provisioner ) project is an example of a dynamic NFS server provisioner.
Here are the steps to get going with it: + +``` +# Create the required SCC +sudo oc apply -f https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs/deploy/kubernetes/scc.yaml +# Install the NFS helm chart, which will create all dependencies +helm install stable/nfs-server-provisioner --name nfs-test --set persistence.enabled=true,persistence.size=100Gi +# Ensure the "nfs-provisioner" service account got created +oc get serviceaccounts +# Bind the SCC to the "nfs-provisioner" service account +sudo oc adm policy add-scc-to-user nfs-provisioner -z nfs-test-nfs-server-provisioner +# Ensure the NFS server pod is up and running +oc get pod nfs-test-nfs-server-provisioner-0 +``` + +If using templates top deploy locate the volume mont for `softAdb` in the template and disable it by commenting it out: + +```yaml +# only mount softAdb when not using NFS, comment it out otherwise +#- name: data +# mountPath: /usr/sw/internalSpool/softAdb +# subPath: softAdb +``` + +## Resources + +For more information about Solace technology in general please visit these resources: + +* The Solace Developer Portal website at: http://dev.solace.com +* Understanding [Solace technology.](http://dev.solace.com/tech/) +* Ask the [Solace community](http://dev.solace.com/community/). \ No newline at end of file diff --git a/resources/ECR-Registry.png b/docs/images/ECR-Registry.png similarity index 100% rename from resources/ECR-Registry.png rename to docs/images/ECR-Registry.png diff --git a/resources/GetOpenShiftURL.png b/docs/images/GetOpenShiftURL.png similarity index 100% rename from resources/GetOpenShiftURL.png rename to docs/images/GetOpenShiftURL.png diff --git a/resources/Solace-HA-StatefulSet-Pods.png b/docs/images/Solace-HA-StatefulSet-Pods.png similarity index 100% rename from resources/Solace-HA-StatefulSet-Pods.png rename to docs/images/Solace-HA-StatefulSet-Pods.png diff --git a/resources/Solace-HA-StatefulSet.png b/docs/images/Solace-HA-StatefulSet.png similarity index 100% rename from resources/Solace-HA-StatefulSet.png rename to docs/images/Solace-HA-StatefulSet.png diff --git a/resources/Solace-HA-Storage.png b/docs/images/Solace-HA-Storage.png similarity index 100% rename from resources/Solace-HA-Storage.png rename to docs/images/Solace-HA-Storage.png diff --git a/resources/Solace-Pod-Log-Stack.png b/docs/images/Solace-Pod-Log-Stack.png similarity index 100% rename from resources/Solace-Pod-Log-Stack.png rename to docs/images/Solace-Pod-Log-Stack.png diff --git a/resources/Solace-Primary-Pod-Events.png b/docs/images/Solace-Primary-Pod-Events.png similarity index 100% rename from resources/Solace-Primary-Pod-Events.png rename to docs/images/Solace-Primary-Pod-Events.png diff --git a/resources/Solace-Primary-Pod-Terminal-CLI.png b/docs/images/Solace-Primary-Pod-Terminal-CLI.png similarity index 100% rename from resources/Solace-Primary-Pod-Terminal-CLI.png rename to docs/images/Solace-Primary-Pod-Terminal-CLI.png diff --git a/resources/network_diagram.jpg b/docs/images/network_diagram.jpg similarity index 100% rename from resources/network_diagram.jpg rename to docs/images/network_diagram.jpg diff --git a/resources/solace_tutorial.png b/docs/images/solace_tutorial.png similarity index 100% rename from resources/solace_tutorial.png rename to docs/images/solace_tutorial.png diff --git a/readme.md b/readme.md index 7c32b47..683d519 100644 --- a/readme.md +++ b/readme.md @@ -1,515 +1,173 @@ -# Deploying a Solace PubSub+ Software Message Broker onto an OpenShift 3.10 or 3.11 platform 
+# Deploying a Solace PubSub+ Software Event Broker onto an OpenShift 3.11 platform -## Purpose of this Repository +The [Solace PubSub+ Platform](https://solace.com/products/platform/)'s [software event broker](https://solace.com/products/event-broker/software/) efficiently streams event-driven information between applications, IoT devices and user interfaces running in the cloud, on-premises, and hybrid environments using open APIs and protocols like AMQP, JMS, MQTT, REST and WebSocket. It can be installed into a variety of public and private clouds, PaaS, and on-premises environments, and brokers in multiple locations can be linked together in an [event mesh](https://solace.com/what-is-an-event-mesh/) to dynamically share events across the distributed enterprise. -This repository provides an example of how to deploy Solace PubSub+ software message brokers onto an OpenShift 3.10 or 3.11 platform. There are [multiple ways](https://docs.openshift.com/index.html ) to get to an OpenShift platform, including [MiniShift](https://github.com/minishift/minishift#welcome-to-minishift ). This guide will specifically use the Red Hat OpenShift Container Platform for deploying an HA group but concepts are transferable to other compatible platforms. There will be also hints on how to set up a simple single-node MiniKube deployment using MiniShift for development, testing or proof of concept purposes. Instructions also apply to earlier OpenShift versions (3.7 and later). +## Overview -For the Red Hat OpenShift Container Platform, we utilize the [RedHat OpenShift on AWS QuickStart](https://aws.amazon.com/quickstart/architecture/openshift/ ) project to deploy a Red Hat OpenShift Container Platform on AWS in a highly redundant configuration, spanning 3 zones. +This document provides a quick getting started guide to install a Solace PubSub+ Software Event Broker in various configurations onto an OpenShift 3.11 platform. -This repository expands on the [Solace Kubernetes Quickstart](https://github.com/SolaceProducts/solace-kubernetes-quickstart ) to provide an example of how to deploy Solace PubSub+ software message brokers in an HA configuration on the OpenShift Container Platform running in AWS. +Detailed OpenShift-specific documentation is provided in the [Solace PubSub+ on OpenShift Documentation](/docs/PubSubPlusOpenShiftDeployment.md). There is also a general [Solace PubSub+ on Kubernetes Documentation](//github.com/SolaceProducts/pubsubplus-kubernetes-quickstart/blob/master/docs/PubSubPlusK8SDeployment.md) available, which the OpenShift deployment builds upon. -![alt text](/resources/network_diagram.jpg "Network Diagram") +This guide is intended mainly for development and demo purposes. The recommended Solace PubSub+ Software Event Broker version is 9.4 or later. -## Description of the Solace PubSub+ Software Message Broker +The PubSub+ deployment does not require any special OpenShift Security Context, the default "restricted" SCC can be used. -The Solace PubSub+ software message broker meets the needs of big data, cloud migration, and Internet-of-Things initiatives, and enables microservices and event-driven architecture. Capabilities include topic-based publish/subscribe, request/reply, message queues/queueing, and data streaming for IoT devices and mobile/web apps. The message broker supports open APIs and standard protocols including AMQP, JMS, MQTT, REST, and WebSocket. 
As well, it can be deployed in on-premise datacenters, natively within private and public clouds, and across complex hybrid cloud environments. +We recommend using the Helm tool for convenience. An alternative method [using OpenShift templates](/docs/PubSubPlusOpenShiftDeployment.md#step-6-option-2-deploy-the-event-broker-using-the-openshift-templates-included-in-this-project) is also available. -## How to deploy a Solace PubSub+ Message Broker onto OpenShift / AWS +## How to deploy Solace PubSub+ Software Event Broker -The following steps describe how to deploy a message broker onto an OpenShift environment. Optional steps are provided about setting up a Red Hat OpenShift Container Platform on Amazon AWS infrastructure (marked as Optional / AWS) and if you use AWS Elastic Container Registry to host the Solace message broker Docker image (marked as Optional / ECR). +The event broker can be deployed in either a 3-node High-Availability (HA) group, or as a single-node standalone deployment. For simple test environments that need only to validate application functionality, a single instance will suffice. Note that in production, or any environment where message loss cannot be tolerated, an HA deployment is required. -There are also two options for deploying a message broker onto your OpenShift deployment: -* (Deployment option 1, using Helm): This option allows great flexibility using the Kubernetes `Helm` tool to automate the process of message broker deployment through a wide range of configuration options including in-service rolling upgrade of the message broker. The [Solace Kubernetes QuickStart project](https://github.com/SolaceProducts/solace-kubernetes-quickstart ) will be referred to deploy the message broker onto your OpenShift environment. -* (Deployment option 2, using OpenShift templates): This option can be used directly, without any additional tool to deploy the message broker in a limited number of configurations, using OpenShift templates included in this project. +In this quick start we go through the steps to set up an event broker using [Solace PubSub+ Helm charts](//hub.helm.sh/charts/solace). -This is a 6 steps process with some steps being optional. Steps to deploy the message broker: +There are three Helm chart variants available with default small-size configurations: +1. `pubsubplus-dev` - minimum footprint PubSub+ for Developers (standalone) +2. `pubsubplus` - PubSub+ standalone, supporting 100 connections +3. `pubsubplus-ha` - PubSub+ HA, supporting 100 connections -**Hint:** You may skip Step 1 if you already have your own OpenShift environment deployed. +For other event broker configurations or sizes, refer to the [PubSub+ Software Event Broker Helm Chart documentation](/pubsubplus/README.md). -> Note: If using MiniShift follow the [instructions to get to a working MiniShift deployment](https://docs.okd.io/latest/minishift/getting-started/index.html ). If using MiniShift in a Windows environment one easy way to follow the shell scripts in the subsequent steps of this guide is to use [Git BASH for Windows](https://gitforwindows.org/ ) and ensure any script files are using unix style line endings by running the `dostounix` tool if needed. +### 1. 
Get an OpenShift environment -### Step 1: (Optional / AWS) Deploy OpenShift Container Platform onto AWS using the RedHat OpenShift AWS QuickStart Project +There are [multiple ways](https://docs.openshift.com/index.html ) to get to an OpenShift 3.11 platform, including [MiniShift](https://github.com/minishift/minishift#welcome-to-minishift ). The [detailed Event Broker on OpenShift Documentation](/docs/PubSubPlusOpenShiftDeployment.md#step-1-optional--aws-deploy-openshift-container-platform-onto-aws-using-the-redhat-openshift-aws-quickstart-project) describes how to set up a production-ready Red Hat OpenShift Container Platform platform on AWS. -* (Part I) Log into the AWS Web Console and run the [OpenShift AWS QuickStart project](https://aws.amazon.com/quickstart/architecture/openshift/ ), which will use AWS CloudFormation for the deployment. We recommend you deploy OpenShift across 3 AWS Availability Zones for maximum redundancy. Please refer to the RedHat OpenShift AWS QuickStart guide and supporting documentation: +Log in as `admin` using the `oc login -u admin` command. - * [Deploying and Managing OpenShift on Amazon Web Services](https://access.redhat.com/documentation/en-us/reference_architectures/2018/html/deploying_and_managing_openshift_3.9_on_amazon_web_services/ ) - - **Important:** As described in above documentation, this deployment requires a Red Hat account with a valid Red Hat subscription to OpenShift and will consume 10 OpenShift entitlements in a maximum redundancy configuration. When no longer needed ensure to follow the steps in the [Deleting the OpenShift Container Platform deployment](#deleting-the-openshift-container-platform-deployment ) section of this guide to free up the entitlements. - - This deployment will create 10 EC2 instances: an *ansible-configserver* and three of each *openshift-etcd*, *openshift-master* and *openshift-nodes* servers.
- - **Note:** only the "*ansible-configserver*" is exposed externally in a public subnet. To access the other servers that are in a private subnet, first [SSH into](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html ) the *ansible-configserver* instance then use that instance as a bastion host to SSH into the target server using it's private IP. For that we recommend enabling [SSH agent forwarding](https://developer.github.com/v3/guides/using-ssh-agent-forwarding/ ) on your local machine to avoid the insecure option of copying and storing private keys remotely on the *ansible-configserver*. - -* (Part II) Once you have deployed OpenShift using the AWS QuickStart you will have to perform additional steps to re-configure OpenShift to integrate fully with AWS. For full details, please refer to the RedHat OpenShift documentation for configuring OpenShift for AWS: - - * [OpenShift > Configuring for AWS](https://docs.openshift.com/container-platform/3.10/install_config/configuring_aws.html ) - - To help with that this quick start provides a script to automate the execution of the required steps: - - * Add the required AWS IAM policies to the ‘Setup Role’ (IAM) used by the RedHat QuickStart to deploy OpenShift to AWS - * Tag public subnets so when creating a public service suitable public subnets can be found - * Re-configure OpenShift Masters and OpenShift Nodes to make OpenShift aware of AWS deployment specifics - - SSH into the *ansible-configserver* then follow the commands. - -``` -## On the ansible-configserver server -# get the scripts -cd ~ -git clone https://github.com/SolaceProducts/solace-openshift-quickstart.git -cd solace-openshift-quickstart/scripts -# substitute your own parameters for the following exports -# You can get the stack names e.g.: from the CloudFormation page of the AWS services console, -# see the 'Overview' tab of the *nested* OpenShiftStack and VPC substacks. -# You can get the access keys from the AWS services console IAM > Users > Security credentials. -export NESTEDOPENSHIFTSTACK_STACKNAME=XXXXXXXXXXXXXXXXXXXXX -export VPC_STACKNAME=XXXXXXXXXXXXXXXXXXXXX -export AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXXXXX -export AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXX -# run the config script -./configureAWSOpenShift.sh +Check to ensure your OpenShift environment is ready: +```bash +# This shall return current user +oc whoami ``` -The script will end with listing the private IP of the *openshift-master* servers, one of which you will need to SSH into for the next step. The command to access it is `ssh ` with SSH agent forwarding enabled. - -Also verify you have access and can login to the OpenShift console. You can get the URL from the CloudFormation page of the AWS services console, see the 'Outputs' tab of the *nested* OpenShiftStack substack. - -![alt text](/resources/GetOpenShiftURL.png "Getting to OpenShift console URL") - -

OpenShift deployment example with nested OpenShiftStack, VPCStack, tabs, keys and values

- +### 2. Install and configure Helm -### Step 2: Prepare your workspace +Note that Helm is transitioning from v2 to v3. Many deployments still use v2. PubSub+ can be deployed using either version, however concurrent use of v2 and v3 from the same command-line environment is not supported. Also note that there is a known [issue with using Helm v3 with OpenShift objects](https://bugzilla.redhat.com/show_bug.cgi?id=1773682) and until resolved Helm v2 is recommended. -**Important:** This and subsequent steps shall be executed on a host having the OpenShift client tools and able to reach your OpenShift cluster nodes - conveniently, this can be one of the *openshift-master* servers. - -> If using MiniShift, continue using your terminal. - -* SSH into your selected host and ensure you are logged in to OpenShift. If you used Step 1 to deploy OpenShift, the requested server URL is the same as the OpenShift console URL, the username is `admin` and the password is as specified in the CloudFormation template. Otherwise use the values specific to your environment. +
Instructions for Helm v2 setup +

+- First download the Helm v2 client. If using Windows, get the [Helm executable](https://storage.googleapis.com/kubernetes-helm/helm-v2.16.0-windows-amd64.zip ) and put it in a directory on your path. +```bash + # Download Helm v2 client, latest version if needed + curl -sSL https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash ``` -## On an openshift-master server -oc whoami -# if not logged in yet -oc login -``` - -* The Solace OpenShift QuickStart project contains useful scripts to help you prepare an OpenShift project for message broker deployment. Retrieve the project in your selected host: +- Use script to install the Helm v2 client and its Tiller server-side operator. This will deploy Tiller in a dedicated project. Do not use this project for your deployments. +```bash + # Setup local Helm client + helm init --client-only + # Install Tiller server-side operator into a new "tiller-project" + oc new-project tiller-project + oc process -f https://github.com/openshift/origin/raw/master/examples/helm/tiller-template.yaml -p TILLER_NAMESPACE="tiller-project" -p HELM_VERSION=v2.16.0 | oc create -f - + oc rollout status deployment tiller + # also let Helm know where Tiller was deployed + export TILLER_NAMESPACE=tiller-project ``` -mkdir ~/workspace -cd ~/workspace -git clone https://github.com/SolaceProducts/solace-openshift-quickstart.git -cd solace-openshift-quickstart -``` - -### Step 3: (Optional: only execute for Deployment option 1 - use the Solace Kubernetes QuickStart to deploy the message broker) Install the Helm client and server-side tools -* **(Part I)** Use the ‘deployHelm.sh’ script to deploy the Helm client and server-side components. Begin by installing the Helm client tool: +

+
-> If using MiniShift, get the [Helm executable](https://storage.googleapis.com/kubernetes-helm/helm-v2.9.1-windows-amd64.zip ) and put it in a directory on your path before running the following script. +
Instructions for Helm v3 setup +

+- Use the [instructions from Helm](//github.com/helm/helm#install) or if using Linux simply run: +```bash + curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash ``` -cd ~/workspace/solace-openshift-quickstart/scripts -./deployHelm.sh client -# Copy and run the export statuments from the script output! -``` +

+
- **Important:** After running the above script, note the **export** statements for the following environment variables from the output - copy and run them. It is also recommended to add them to `~/.bashrc` on your machine so they are automatically sourced at future sessions (These environment variables are required every time when running the `helm` client tool). +Helm is configured properly if the command `helm version` returns no error. - -* **(Part II)** Install the Helm server-side ‘Tiller’ component: -``` -cd ~/workspace/solace-openshift-quickstart/scripts -./deployHelm.sh server -``` - -### Step 4: Create and configure a project to host the message broker deployment - -* Use the ‘prepareProject.sh’ script the Solace OpenShift QuickStart to create and configure an OpenShift project that meets requirements of the message broker deployment: +### 3. Install Solace PubSub+ Software Event Broker with default configuration +- Add the Solace Helm charts to your local Helm repo: +```bash + helm repo add solacecharts https://solaceproducts.github.io/pubsubplus-kubernetes-quickstart/helm-charts ``` -# If using Minishift start with this command: oc login -u system:admin -cd ~/workspace/solace-openshift-quickstart/scripts -sudo ./prepareProject.sh solace-pubsub # adjust your project name as needed here and in subsequent commands -# In Minishift return to admin user: oc login -u admin -``` - -> Note: The purpose of using `sudo` is to elevate `admin` user to `system:admin`. This is not available when using MiniShift and apply above workaround for just this step. - -### Step 5: Optional: Load the message broker (Docker image) to your Docker Registry - -Deployment scripts will pull the Solace message broker image from a [Docker registry](https://docs.Docker.com/registry/ ). There are several [options which registry to use](https://docs.openshift.com/container-platform/3.10/architecture/infrastructure_components/image_registry.html#overview ) depending on the requirements of your project, see some examples in (Part II) of this step. - -**Hint:** You may skip the rest of this step if using the free PubSub+ Standard Edition available from the [Solace public Docker Hub registry](https://hub.Docker.com/r/solace/solace-pubsub-standard/tags/ ). The Docker Registry URL to use will be `solace/solace-pubsub-standard:`. - -* **(Part I)** Download a copy of the message broker Docker image. - - Go to the Solace Developer Portal and download the Solace PubSub+ software message broker as a **Docker** image or obtain your version from Solace Support. - - * If using Solace PubSub+ Enterprise Evaluation Edition, go to the Solace Downloads page. For the image reference, copy and use the download URL in the Solace PubSub+ Enterprise Evaluation Edition Docker Images section. - - | PubSub+ Enterprise Evaluation Edition
Docker Image - | :---: | - | 90-day trial version of PubSub+ Enterprise | - | [Get URL of Evaluation Docker Image](http://dev.solace.com/downloads#eval ) | - - -* **(Part II)** Deploy the message broker Docker image to your Docker registry of choice - Options include: +- By default the publicly available [latest Docker image of PubSub+ Standard Edition](https://hub.Docker.com/r/solace/solace-pubsub-standard/tags/) will be used. [Load a different image into a registry](/docs/PubSubPlusOpenShiftDeployment.md#step-5-optional-load-the-event-broker-docker-image-to-your-docker-registry) if required. If using a different image, add the `image.repository=,image.tag=` values to the `--set` commands below, comma-separated. - * You can choose to use [OpenShift's Docker registry.](https://docs.openshift.com/container-platform/3.10/install_config/registry/deploy_registry_existing_clusters.html ). For MiniShift a simple option is to use the [Minishift Docker daemon](//docs.okd.io/latest/minishift/using/docker-daemon.html). - - * **(Optional / ECR)** You can utilize the AWS Elastic Container Registry (ECR) to host the message broker Docker image. For more information, refer to [Amazon Elastic Container Registry](https://aws.amazon.com/ecr/ ). If you are using ECR as your Docker registry then you must add the ECR login credentials (as an OpenShift secret) to your message broker HA deployment. This project contains a helper script to execute this step: - -```shell - # Required if using ECR for Docker registry - cd ~/workspace/solace-openshift-quickstart/scripts - sudo su - aws configure # provide AWS config for root; provide your key ID and key, leave the rest to None. - ./addECRsecret.sh solace-pubsub # adjust your project name as needed +- Create or switch to your project +```bash + oc new-project solace-pubsub ``` - Here is an outline of the additional steps required if loading an image to ECR: - - * Copy the Solace Docker image location and download the image archive locally using the `wget ` command. - * Load the downloaded image to the local docker image repo using the `docker load -i ` command - * Go to your target ECR repository in the [AWS ECR Repositories console](https://console.aws.amazon.com/ecr ) and get the push commands information by clicking on the "View push commands" button. - * Start from the `docker tag` command to tag the image you just loaded. Use `docker images` to find the Solace Docker image just loaded. You may need to use - * Finally, use the `docker push` command to push the image. - * Exit from superuser to normal user - -![alt text](/resources/ECR-Registry.png "ECR Registry") - -### Step 6: (Option 1) Deploy the message broker using the Solace Kubernetes QuickStart - -If you require more flexibility in terms of message broker deployment options (compared to those offered by the OpenShift templates provided by this project) then use the [Solace Kubernetes QuickStart](https://github.com/SolaceProducts/solace-kubernetes-quickstart ) to deploy the message broker: - -* Retrieve the Solace Kubernetes QuickStart from GitHub: +
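For reference, if a different image is used, the `image.repository` and `image.tag` overrides mentioned above can be appended to the `--set` list of any of the chart variants below. This is only a sketch: the registry location and tag shown are placeholders, and the `--name my-release` flag applies to Helm v2 only (omit it for Helm v3).
```bash
  # Example only: deploy the standalone variant with a non-default image location and tag (placeholder values)
  helm install --name my-release solacecharts/pubsubplus \
    --set securityContext.enabled=false,image.repository=registry.example.com/solace-pubsub-standard,image.tag=latest
```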
Instructions using Helm v2 +

+- **Important**: For each new project using Helm v2, grant admin access to the server-side Tiller service from the "tiller-project" and set the TILLER_NAMESPACE environment. +```bash + oc policy add-role-to-user admin "system:serviceaccount:tiller-project:tiller" + # if not already exported, ensure Helm knows where Tiller was deployed + export TILLER_NAMESPACE=tiller-project ``` -cd ~/workspace -git clone https://github.com/SolaceProducts/solace-kubernetes-quickstart.git -cd solace-kubernetes-quickstart -``` - -* Update the Solace Kubernetes Helm chart values.yaml configuration file for your target deployment with the help of the Kubernetes quick start `configure.sh` script. (Please refer to the [Solace Kubernetes QuickStart](https://github.com/SolaceProducts/solace-kubernetes-quickstart#step-4 ) for further details): +> Ensure each command-line session has the TILLER_NAMESPACE environment variable properly set! -Notes: - -* Providing `-i SOLACE_IMAGE_URL` is optional (see [Step 5](#step-5-load-the-message-broker-Docker-image-to-your-Docker-registry ) if using the latest Solace PubSub+ Standard edition message broker image from the Solace public Docker Hub registry -* Set the cloud provider option to `-c aws` when deploying a message broker in an OpenShift / AWS environment -* Ensure Helm runs by executing `helm version`. If not, revisit [Step 3](#step-3-optional-only-for-deployment-option-1---use-the-solace-kubernetes-quickstart-to-deploy-the-message-broker-install-the-helm-client-and-server-side-tools ), including the export statements. - -HA deployment example: +- Use one of the chart variants to create a deployment. For configuration options and delete instructions, refer to the [PubSub+ Software Event Broker Helm Chart documentation](https://github.com/SolaceProducts/pubsubplus-kubernetes-quickstart/tree/HelmReorg/pubsubplus). +a) Create a Solace PubSub+ minimum deployment for development purposes using `pubsubplus-dev`. It requires a minimum of 1 CPU and 2 GB of memory be available to the PubSub+ pod. +```bash + # Deploy PubSub+ Standard edition, minimum footprint developer version + helm install --name my-release solacecharts/pubsubplus-dev \ + --set securityContext.enabled=false ``` -oc project solace-pubsub # adjust your project name as needed -cd ~/workspace/solace-kubernetes-quickstart/solace -../scripts/configure.sh -p -c aws -v values-examples/prod1k-persist-ha-provisionPvc.yaml -i -# Initiate the deployment -helm install . -f values.yaml -# Wait until all pods running and ready and the active message broker pod label is "active=true" -watch oc get pods --show-labels -``` - -non-HA deployment example: +b) Create a Solace PubSub+ standalone deployment, supporting 100 connections scaling using `pubsubplus`. A minimum of 2 CPUs and 4 GB of memory must be available to the PubSub+ pod. +```bash + # Deploy PubSub+ Standard edition, standalone + helm install --name my-release solacecharts/pubsubplus \ + --set securityContext.enabled=false ``` -oc project solace-pubsub # adjust your project name as needed -cd ~/workspace/solace-kubernetes-quickstart/solace -../scripts/configure.sh -p -c aws -v values-examples/prod1k-persist-noha-provisionPvc.yaml -i -# Initiate the deployment -helm install . -f values.yaml -# Wait until all pods running and ready and the active message broker pod label is "active=true" -watch oc get pods --show-labels -``` - -### Step 6: (Option 2) Deploy the message broker using the OpenShift templates included in this project -**Prerequisites:** -1. 
Determine your message broker disk space requirements. We recommend a minimum of 30 gigabytes of disk space. -2. Define a strong password for the 'admin' user of the message broker and then base64 encode the value. This value will be specified as a parameter when processing the message broker OpenShift template: -``` -echo -n 'strong@dminPw!' | base64 +c) Create a Solace PubSub+ HA deployment, supporting 100 connections scaling using `pubsubplus-ha`. The minimum resource requirements are 2 CPU and 4 GB of memory available to each of the three PubSub+ pods. +```bash + # Deploy PubSub+ Standard edition, HA + helm install --name my-release solacecharts/pubsubplus-ha \ + --set securityContext.enabled=false ``` -3. Switch to the templates directory: -``` -oc project solace-pubsub # adjust your project name as needed -cd ~/workspace/solace-openshift-quickstart/templates -``` - -**Deploy the message broker:** - -You can deploy the message broker in either a single-node or high-availability configuration. +
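After installing one of the variants above, the bringup can be followed from the command line; for example, assuming the release name `my-release` used above:
```bash
  # Helm v2: list the release just created
  helm list
  # Watch the event broker pod(s) become ready; with the HA variant the pod serving traffic gets the label active=true
  watch oc get statefulset,service,pods,pvc,pv --show-labels
```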

+
-Note: DOCKER_REGISTRY_URL and MESSAGEBROKER_IMAGE_TAG default to `solace/solace-pubsub-standard` and `latest`, MESSAGEBROKER_STORAGE_SIZE defaults to 30Gi. +
Instructions using Helm v3 +

-The template by default provides for a small-footprint Solace message broker deployment deployable in MiniShift. Adjust `export system_scaling_maxconnectioncount` in the template for higher scaling but ensure adequate resources are available to the pod(s). Refer to the [System Requirements in the Solace documentation](//docs.solace.com/Configuring-and-Managing/SW-Broker-Specific-Config/Scaling-Tier-Resources.htm). +- Use one of the chart variants to create a deployment. For configuration options and delete instructions, refer to the [PubSub+ Software Event Broker Helm Chart documentation](https://github.com/SolaceProducts/pubsubplus-kubernetes-quickstart/tree/HelmReorg/pubsubplus). -Also note that if a deployment failed and then deleted using `oc delete -f`, ensure to delete any remaining PVCs. Failing to do so and retrying using the same deployment name will result in an already used PV volume mounted and the pod(s) may not come up. - -The template by default provides for a small-footprint Solace message broker deployment deployable in MiniShift. Adjust `export system_scaling_maxconnectioncount` in the template for higher scaling but ensure adequate resources are available to the pod(s). Refer to the [System Requirements in the Solace documentation](//docs.solace.com/Configuring-and-Managing/SW-Broker-Specific-Config/Scaling-Tier-Resources.htm). - -* For a **Single-Node** configuration: - * Process the Solace 'Single Node' OpenShift template to deploy the message broker in a single-node configuration. Specify values for the DOCKER_REGISTRY_URL, MESSAGEBROKER_IMAGE_TAG, MESSAGEBROKER_STORAGE_SIZE, and MESSAGEBROKER_ADMIN_PASSWORD parameters: -``` -oc project solace-pubsub # adjust your project name as needed -cd ~/workspace/solace-openshift-quickstart/templates -oc process -f messagebroker_singlenode_template.yaml DEPLOYMENT_NAME=test-singlenode DOCKER_REGISTRY_URL= MESSAGEBROKER_IMAGE_TAG= MESSAGEBROKER_STORAGE_SIZE=30Gi MESSAGEBROKER_ADMIN_PASSWORD= | oc create -f - -# Wait until all pods running and ready -watch oc get statefulset,service,pods,pvc,pv +a) Create a Solace PubSub+ minimum deployment for development purposes using `pubsubplus-dev`. It requires minimum 1 CPU and 2 GB of memory available to the PubSub+ event broker pod. +```bash + # Deploy PubSub+ Standard edition, minimum footprint developer version + helm install my-release solacecharts/pubsubplus-dev \ + --set securityContext.enabled=false ``` -* For a **High-Availability** configuration: - * Process the Solace 'HA' OpenShift template to deploy the message broker in a high-availability configuration. Specify values for the DOCKER_REGISTRY_URL, MESSAGEBROKER_IMAGE_TAG, MESSAGEBROKER_STORAGE_SIZE, and MESSAGEBROKER_ADMIN_PASSWORD parameters: -``` -oc project solace-pubsub # adjust your project name as needed -cd ~/workspace/solace-openshift-quickstart/templates -oc process -f messagebroker_ha_template.yaml DEPLOYMENT_NAME=test-ha DOCKER_REGISTRY_URL= MESSAGEBROKER_IMAGE_TAG= MESSAGEBROKER_STORAGE_SIZE=30Gi MESSAGEBROKER_ADMIN_PASSWORD= | oc create -f - -# Wait until all pods running and ready -watch oc get statefulset,service,pods,pvc,pv +b) Create a Solace PubSub+ standalone deployment, supporting 100 connections scaling using `pubsubplus`. A minimum of 2 CPUs and 4 GB of memory must be available to the PubSub+ pod. 
+```bash + # Deploy PubSub+ Standard edition, standalone + helm install my-release solacecharts/pubsubplus \ + --set securityContext.enabled=false ``` - -## Validating the Deployment -Now you can validate your deployment from the OpenShift client shell: - -``` -[ec2-user@ip-10-0-23-198 ~]$ oc get statefulset,service,pods,pvc,pv --show-labels -NAME DESIRED CURRENT AGE LABELS -statefulsets/plucking-squid-solace 3 3 3m app=solace,chart=solace-0.3.0,heritage=Tiller,release=plucking-squid - -NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE LABELS -svc/plucking-squid-solace 172.30.15.249 ae2dd15e27880... 22:30811/TCP,8080:30295/TCP,55555:30079/TCP 3m app=solace,chart=solace-0.3.0,heritage=Tiller,release=plucking-squid -svc/plucking-squid-solace-discovery None 8080/TCP 3m app=solace,chart=solace-0.3.0,heritage=Tiller,release=plucking-squid - -NAME READY STATUS RESTARTS AGE LABELS -po/plucking-squid-solace-0 1/1 Running 0 3m active=true,app=solace,controller-revision-hash=plucking-squid-solace-335123159,release=plucking-squid -po/plucking-squid-solace-1 1/1 Running 0 3m app=solace,controller-revision-hash=plucking-squid-solace-335123159,release=plucking-squid -po/plucking-squid-solace-2 1/1 Running 0 3m app=solace,controller-revision-hash=plucking-squid-solace-335123159,release=plucking-squid - -NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE LABELS -pvc/data-plucking-squid-solace-0 Bound pvc-e2e20e0f-7880-11e8-b199-06c6ba3800d0 30Gi RWO plucking-squid-standard 3m app=solace,release=plucking-squid -pvc/data-plucking-squid-solace-1 Bound pvc-e2e4379c-7880-11e8-b199-06c6ba3800d0 30Gi RWO plucking-squid-standard 3m app=solace,release=plucking-squid -pvc/data-plucking-squid-solace-2 Bound pvc-e2e6e88d-7880-11e8-b199-06c6ba3800d0 30Gi RWO plucking-squid-standard 3m app=solace,release=plucking-squid - -NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE LABELS -pv/pvc-01e8785b-74b4-11e8-ac35-0afbbfab169a 1Gi RWO Delete Bound openshift-ansible-service-broker/etcd gp2 4d failure-domain.beta.kubernetes.io/region=eu-central-1,failure-domain.beta.kubernetes.io/zone=eu-central-1b -pv/pvc-229cf3d4-74b4-11e8-ba4e-02b74a526708 1Gi RWO Delete Bound aws-service-broker/etcd gp2 4d failure-domain.beta.kubernetes.io/region=eu-central-1,failure-domain.beta.kubernetes.io/zone=eu-central-1b -pv/pvc-cf27bd8c-74b3-11e8-ac35-0afbbfab169a 10Gi RWO Delete Bound openshift-infra/metrics-cassandra-1 gp2 4d failure-domain.beta.kubernetes.io/region=eu-central-1,failure-domain.beta.kubernetes.io/zone=eu-central-1c -pv/pvc-e2e20e0f-7880-11e8-b199-06c6ba3800d0 30Gi RWO Delete Bound solace-pubsub/data-plucking-squid-solace-0 plucking-squid-standard 3m failure-domain.beta.kubernetes.io/region=eu-central-1,failure-domain.beta.kubernetes.io/zone=eu-central-1c -pv/pvc-e2e4379c-7880-11e8-b199-06c6ba3800d0 30Gi RWO Delete Bound solace-pubsub/data-plucking-squid-solace-1 plucking-squid-standard 3m failure-domain.beta.kubernetes.io/region=eu-central-1,failure-domain.beta.kubernetes.io/zone=eu-central-1a -pv/pvc-e2e6e88d-7880-11e8-b199-06c6ba3800d0 30Gi RWO Delete Bound solace-pubsub/data-plucking-squid-solace-2 plucking-squid-standard 3m failure-domain.beta.kubernetes.io/region=eu-central-1,failure-domain.beta.kubernetes.io/zone=eu-central-1b -[ec2-user@ip-10-0-23-198 ~]$ -[ec2-user@ip-10-0-23-198 ~]$ -[ec2-user@ip-10-0-23-198 ~]$ oc describe svc -Name: plucking-squid-solace -Namespace: solace-pubsub -Labels: app=solace - chart=solace-0.3.0 - heritage=Tiller - release=plucking-squid -Annotations: -Selector: 
active=true,app=solace,release=plucking-squid -Type: LoadBalancer -IP: 172.30.15.249 -LoadBalancer Ingress: ae2dd15e2788011e8b19906c6ba3800d-1889414054.eu-central-1.elb.amazonaws.com -Port: ssh 22/TCP -TargetPort: 2222/TCP -NodePort: ssh 31569/TCP -Endpoints: 10.128.2.11:2222 -Port: semp 8080/TCP -TargetPort: 8080/TCP -NodePort: semp 31260/TCP -Endpoints: 10.128.2.11:8080 -Port: smf 55555/TCP -TargetPort: 55555/TCP -NodePort: smf 32027/TCP -Endpoints: 10.128.2.11:55555 -Port: semptls 943/TCP -TargetPort: 60943/TCP -NodePort: semptls 31243/TCP -Endpoints: 10.128.2.11:60943 -Port: web 80/TCP -TargetPort: 60080/TCP -NodePort: web 32240/TCP -Endpoints: 10.128.2.11:60080 -Port: webtls 443/TCP -TargetPort: 60443/TCP -NodePort: webtls 30548/TCP -Endpoints: 10.128.2.11:60443 -Session Affinity: None -Events: - FirstSeen LastSeen Count From SubObjectPath Type Reason Message - --------- -------- ----- ---- ------------- -------- ------ ------- - 5m 5m 1 service-controller Normal CreatingLoadBalancer Creating load balancer - 5m 5m 1 service-controller Normal CreatedLoadBalancer Created load balancer - - -Name: plucking-squid-solace-discovery -Namespace: solace-pubsub -Labels: app=solace - chart=solace-0.3.0 - heritage=Tiller - release=plucking-squid -Annotations: service.alpha.kubernetes.io/tolerate-unready-endpoints=true -Selector: app=solace,release=plucking-squid -Type: ClusterIP -IP: None -Port: semp 8080/TCP -Endpoints: 10.129.0.11:8080,10.130.0.12:8080,10.131.0.9:8080 -Session Affinity: None -Events: +c) Create a Solace PubSub+ HA deployment, supporting 100 connections scaling using `pubsubplus-ha`. The minimum resource requirements are 2 CPU and 4 GB of memory available to each of the three event broker pods. +```bash + # Deploy PubSub+ Standard edition, HA + helm install my-release solacecharts/pubsubplus-ha \ + --set securityContext.enabled=false ``` +
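As with the Helm v2 variants, deployment progress can be checked once the chart has been installed; for example, assuming the release name `my-release`:
```bash
  # Helm v3: show the release status and the notes printed at install time
  helm status my-release
  # Watch the event broker pod(s) become ready; the pod serving traffic gets the label active=true
  oc get pods --show-labels -w
```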

+
-Find the **'LoadBalancer Ingress'** value listed in the service description above. This is the publicly accessible Solace Connection URI for messaging clients and management. In the example it is `ae2dd15e2788011e8b19906c6ba3800d-1889414054.eu-central-1.elb.amazonaws.com`. - -> Note: If using MiniShift an additional step is required to expose the service: `oc get --export svc plucking-squid-solace`. This will return a service definition with nodePort port numbers for each message router service. Use these port mumbers together with MiniShift's public IP address which can be obtained from the command `minishift ip`. - - -### Viewing bringup logs - -To see the deployment events, navigate to: - -* **OpenShift UI > (Your Project) > Applications > Stateful Sets > ((name)-solace) > Events** +The above options will start the deployment and write related information and notes to the screen. -You can access the log stack for individual message broker pods from the OpenShift UI, by navigating to: +> Note: If using MiniShift an additional step is required to expose the service: `oc get --export svc my-release-pubsubplus`. This will return a service definition with nodePort port numbers for each message router service. Use these port numbers together with MiniShift's public IP address which can be obtained from the command `minishift ip`. -* **OpenShift UI > (Your Project) > Applications > Stateful Sets > ((name)-solace) > Pods > ((name)-solace-(N)) > Logs** +Wait for the deployment to complete following the instructions, then you can [validate the deployment and try the management and messaging services](/docs/PubSubPlusOpenShiftDeployment.md#validating-the-deployment). -![alt text](/resources/Solace-Pod-Log-Stack.png "Message Broker Pod Log Stack") +If any issues, refer to the [Troubleshooting](https://github.com/SolaceProducts/pubsubplus-kubernetes-quickstart/blob/master/docs/PubSubPlusK8SDeployment.md#troubleshooting) section of the general PubSub+ Kubernetes Documentation - substitute any `kubectl` commands with `oc` commands. -Where (N) above is the ordinal of the Solace message broker: - * 0 - Primary message broker - * 1 - Backup message broker - * 2 - Monitor message broker +If you need to start over, follow the [steps to delete the current deployment](/docs/PubSubPlusOpenShiftDeployment.md#deleting-the-pubsub-event-broker-deployment). -## Gaining admin and ssh access to the message broker - -The external management host URI will be the Solace Connection URI associated with the load balancer generated by the message broker OpenShift template. Access will go through the load balancer service as described in the introduction and will always point to the active message broker. The default port is 22 for CLI and 8080 for SEMP/SolAdmin. - -If you deployed OpenShift in AWS, then the Solace OpenShift QuickStart will have created an EC2 Load Balancer to front the message broker / OpenShift service. The Load Balancer public DNS name can be found in the AWS EC2 console under the 'Load Balancers' section. - -To lauch Solace CLI or ssh into the individual message broker instances from the OpenShift CLI use: - -``` -# CLI access -oc exec -it XXX-XXX-solace-X cli # adjust pod name to your deployment -# shell access -oc exec -it XXX-XXX-solace-X bash # adjust pod name to your deployment -``` - -> Note for MiniShift: if using Windows you may get an error message: `Unable to use a TTY`. Install and preceed above commands with `winpty` until this is fixed in the MiniShift project. 
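Referring to the MiniShift note above, a minimal lookup of the published ports might look like the following; `my-release` is the example release name used earlier and the service name is assumed to follow the `<release>-pubsubplus` pattern:
```bash
  # MiniShift only: get the service definition including the nodePort numbers
  oc get --export svc my-release-pubsubplus
  # Combine the nodePort of the required service with the MiniShift public IP
  minishift ip
```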
- - -You can also gain access to the Solace CLI and container shell for individual message broker instances from the OpenShift UI. A web-based terminal emulator is available from the OpenShift UI. Navigate to an individual message broker Pod using the OpenShift UI: - -* **OpenShift UI > (Your Project) > Applications > Stateful Sets > ((name)-solace) > Pods > ((name)-solace-(N)) > Terminal** - -Once you have launched the terminal emulator to the message broker pod you may access the Solace CLI by executing the following command: - -``` -/usr/sw/loads/currentload/bin/cli -A -``` - -![alt text](/resources/Solace-Primary-Pod-Terminal-CLI.png "Message Broker CLI via OpenShift UI Terminal emulator") - -See the [Solace Kubernetes Quickstart README](https://github.com/SolaceProducts/solace-kubernetes-quickstart/tree/master#gaining-admin-access-to-the-message-broker ) for more details including admin and SSH access to the individual message brokers. - -## Testing data access to the message broker - -To test data traffic though the newly created message broker instance, visit the Solace Developer Portal and select your preferred programming language to [send and receive messages](http://dev.solace.com/get-started/send-receive-messages/ ). Under each language there is a Publish/Subscribe tutorial that will help you get started. - -Note: the Host will be the Solace Connection URI. It may be necessary to [open up external access to a port](https://github.com/SolaceProducts/solace-kubernetes-quickstart/tree/master#upgradingmodifying-the-message-broker-cluster ) used by the particular messaging API if it is not already exposed. - -![alt text](/resources/solace_tutorial.png "getting started publish/subscribe") - -
- -## Deleting a deployment - -### Deleting the Solace message broker deployment - -To delete the deployment or to start over from Step 6 in a clean state: - -* If used (Option 1) Helm to deploy, execute: - -``` -helm list # will list the releases (deployments) -helm delete XXX-XXX # will delete instances related to your deployment - "plucking-squid" in the example above -``` - -* If used (Option 2) OpenShift templates to deploy, use: - -``` -cd ~/workspace/solace-openshift-quickstart/templates -oc process -f DEPLOYMENT_NAME= | oc delete -f - -``` - -**Note:** Above will not delete dynamic Persistent Volumes (PVs) and related Persistent Volume Claims (PVCs). If recreating the deployment with same name, the original volumes get mounted with existing configuration. Deleting the PVCs will also delete the PVs: - -``` -# List PVCs -oc get pvc -# Delete unneeded PVCs -oc delete pvc -``` - -To remove the project or to start over from Step 4 in a clean state, delete the project using the OpenShift console or the command line. For more details, refer to the [OpenShift Projects](https://docs.openshift.com/enterprise/3.0/dev_guide/projects.html ) documentation. - -``` -oc delete project solace-pubsub # adjust your project name as needed -``` - -### Deleting the OpenShift Container Platform deployment - -To delete your OpenShift Container Platform deployment that was set up at Step 1, first you need to detach the IAM policies from the ‘Setup Role’ (IAM) that were attached in (Part II) of Step 1. Then you also need to ensure to free up the allocated OpenShift entitlements from your subscription otherwise they will no longer be available for a subsequent deployment. - -Use this quick start's script to automate the execution of the required steps. SSH into the *ansible-configserver* then follow the commands: - -``` -# assuming solace-openshift-quickstart/scripts are still available from Step 1 -cd ~/solace-openshift-quickstart/scripts -./prepareDeleteAWSOpenShift.sh -``` - -Now the OpenShift stack delete can be initiated from the AWS CloudFormation console. - -## Special topics - -### Running the message broker in unprivileged container - -In this QuickStart the message broker gets deployed in an unprivileged container with necessary additional fine-grained [Linux capabilities](http://man7.org/linux/man-pages/man7/capabilities.7.html ) opened up that are required by the broker operation. - -To deploy the message broker in unprivileged container the followings are required and are already taken care of by the scripts: - -* A custom [OpenShift SCC](https://docs.openshift.com/container-platform/3.10/architecture/additional_concepts/authorization.html#security-context-constraints ) defining the fine grained permissions above the "restricted" SCC needs to be created and assigned to the deployment user of the project. See the [sccForUnprivilegedCont.yaml](https://github.com/SolaceProducts/solace-openshift-quickstart/blob/master/scripts/templates/sccForUnprivilegedCont.yaml ) file in this repo. -* The requested `securityContext` for the container shall be `privileged: false` -* Additionally, any privileged ports (port numbers less than 1024) used need to be reconfigured. For example, port 22 for SSH access needs to be reconfigured to e.g.: 22222. Note that this is at the pod level and the load balancer has been configured to expose SSH at port 22 at the publicly accessible Solace Connection URI. 
- -### Using NFS for persitent storage - -The Solace PubSub+ message broker supports NFS for persistent storage, with "root_squash" option configured on the NFS server. - -For an example using dynamic volume provisioning with NFS, use the Solace Kubernetes Helm chart `values-examples/prod1k-persist-ha-nfs.yaml` configuration file in [Step 6](#step-6-option-1-deploy-the-message-broker-using-the-solace-kubernetes-quickstart ). By default, this sample configuration is using the StorageClass "nfs" for volume claims, assuming this StorageClass is backed by an NFS server. - -The Helm (NFS Server Provisioner)[https://github.com/helm/charts/tree/master/stable/nfs-server-provisioner ] project is an example of a dynamic NFS server provisioner. Here are the steps to get going with it: - -``` -# Create the required SCC -sudo oc apply -f https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs/deploy/kubernetes/scc.yaml -# Install the NFS helm chart, which will create all dependencies -helm install stable/nfs-server-provisioner --name nfs-test --set persistence.enabled=true,persistence.size=100Gi -# Ensure the "nfs-provisioner" service account got created -oc get serviceaccounts -# Bind the SCC to the "nfs-provisioner" service account -sudo oc adm policy add-scc-to-user nfs-provisioner -z nfs-test-nfs-server-provisioner -# Ensure the NFS server pod is up and running -oc get pod nfs-test-nfs-server-provisioner-0 -``` ## Contributing @@ -517,7 +175,7 @@ Please read [CONTRIBUTING.md](CONTRIBUTING.md) for details on our code of conduc ## Authors -See the list of [contributors](https://github.com/SolaceProducts/solace-openshift-quickstart/graphs/contributors ) who participated in this project. +See the list of [contributors](//github.com/SolaceProducts/solace-kubernetes-quickstart/graphs/contributors) who participated in this project. ## License @@ -527,6 +185,6 @@ This project is licensed under the Apache License, Version 2.0. - See the [LICEN For more information about Solace technology in general please visit these resources: -* The Solace Developer Portal website at: http://dev.solace.com -* Understanding [Solace technology.](http://dev.solace.com/tech/) -* Ask the [Solace community](http://dev.solace.com/community/). \ No newline at end of file +- The Solace Developer Portal website at: [solace.dev](//solace.dev/) +- Understanding [Solace technology](//solace.com/products/platform/) +- Ask the [Solace community](//dev.solace.com/community/). diff --git a/scripts/deployHelm.sh b/scripts/deployHelm.sh deleted file mode 100755 index c45b818..0000000 --- a/scripts/deployHelm.sh +++ /dev/null @@ -1,108 +0,0 @@ -#!/bin/bash -# This script will automate the steps to deploy the Helm client and server-side components -# NOTE: Helm is a templating engine which can be used to generate OpenShift template files and execute -# the template on the OpenShift server. -# -# REQUIREMENTS: -# 1. The Helm client and server-side components must be deployed in your OpenShift environment -# in order to use the Solace 'helm' (Chart) templates -# 2. The Tiller project must be granted sufficient privileges to execute the specified OpenShift template(s). -# -# Usage: -# . ./deployHelm.sh client -# . ./deployHelm.sh server -# -TILLER_PROJECT=tiller -HELM_VERSION=2.14.0 - -function helmVersion() { - which helm &> /dev/null - if [ $? -ne 0 ]; then - echo "The helm client tool executable is not in your search path" - echo "export PATH=\$PATH:\$HOME/linux-amd64" - else - helm version - if [ $? 
-ne 0 ]; then - echo "There was a problem retrieving helm details. Ensure you have the following environment variables defined:" - echo " HELM_HOME=\${HOME}/.helm" - echo " TILLER_NAMESPACE=${TILLER_PROJECT}" - fi - fi -} - -function ocLogin() { - # Log the user into OpenShift if they are not already logged in - oc whoami &> /dev/null - if [ $? -ne 0 ]; then - echo "Not logged into Openshift. Now logging in." - oc login - else - echo "Already logged into OpenShift as `oc whoami`" - fi -} - -function deployHelmClient () { - # Deploy Helm client - which helm &> /dev/null - if [ $? -ne 0 ]; then - cd $HOME - curl -s "https://storage.googleapis.com/kubernetes-helm/helm-v${HELM_VERSION}-linux-amd64.tar.gz" | tar xz - $HOME/linux-amd64/helm init --client-only - - echo "#############################################################" - echo "Client install completed. Ensure following environment variables are exported:" - - echo "export PATH=\$PATH:\$HOME/linux-amd64" - export PATH=$PATH:~/linux-amd64 - else - echo "Skipping Helm client installation, Helm is already installed." - echo " helm executable found in --> $(which helm)" - helm init --client-only - - echo "#############################################################" - echo "Ensure following environment variables are exported:" - fi - - echo "export HELM_HOME=\$HOME/.helm" - export HELM_HOME=$HOME/.helm - - echo "export TILLER_NAMESPACE=${TILLER_PROJECT}" - export TILLER_NAMESPACE=$TILLER_PROJECT - -} - -function deployHelmServer() { - # Deploy Helm / Tiller Server - ocLogin - - oc project $TILLER_PROJECT &> /dev/null - if [ $? -ne 0 ]; then - echo "Deploying Helm Tiller to project name: $TILLER_PROJECT" - - echo "export HELM_HOME=\$HOME/.helm" - export HELM_HOME=$HOME/.helm - - echo "export TILLER_NAMESPACE=${TILLER_PROJECT}" - export TILLER_NAMESPACE=$TILLER_PROJECT - - oc new-project $TILLER_PROJECT - oc process -f templates/deployHelmServer.yaml -p TILLER_NAMESPACE="${TILLER_NAMESPACE}" | oc create -f - - oc rollout status deployment $TILLER_PROJECT - echo "Waiting for Tiller pod to complete deployment" - sleep 10 ; # Allow some time for Tiller server to complete deployment - helmVersion - else - echo "Helm ${TILLER_PROJECT} project already exists. Skipping its creation." - oc describe project ${TILLER_PROJECT} - helmVersion - fi -} - -if [ "$1" == "client" ]; then - deployHelmClient -elif [ "$1" == "server" ]; then - deployHelmServer -else - echo "Usage: " - echo " . ./deployHelm.sh [client | server]" -fi diff --git a/scripts/prepareProject.sh b/scripts/prepareProject.sh deleted file mode 100755 index 9ae3a7d..0000000 --- a/scripts/prepareProject.sh +++ /dev/null @@ -1,60 +0,0 @@ -#!/bin/bash -# This script preapres an OpenShift project for deploying -# the Solace message broker software. The following steps are necessary: -# 1. If using Helm (Tiller project detected) grant the necessary privileges to the Helm Tiller project so that the Tiller project -# may deploy the necessary components of the Solace message broker software in its own project -# 2. Grant the necessary OpenShift privileges to the project hosting the Solace message broker as required -# -# PREREQUISITES: -# 1. If used, Helm client and server-side components (Tiller) have been already deployed in the OpenShift environment -# -# Usage: -# sudo ./prepareProject.sh -# -if [ $# -eq 0 ]; then - echo "Usage: " - echo "./prepareProject.sh " - exit 1 -fi - -PROJECT=$1 -TILLER=tiller - -function ocLogin() { - # Log in as system:admin into OpenShift if not already logged in. 
This script requires a user with cluster-admin role - oc whoami &> /dev/null - if [ $? -ne 0 ]; then - echo "Not logged into Openshift. Now logging in." - oc login -u system:admin -n default - oc version - else - echo "Already logged into OpenShift as `oc whoami`" - fi -} -ocLogin - - -# Create project -oc project ${PROJECT} &> /dev/null -if [ $? -ne 0 ]; then - oc new-project ${PROJECT} - oc policy add-role-to-user admin admin -n ${PROJECT} -else - echo "Skipping project creation, project ${PROJECT} already exists..." -fi - -# If deployed, grant the Tiller project the required access to deploy the Solace message router components -if [[ "`oc get projects | grep tiller`" ]]; then - echo "Tiller project detected, adding access to the ${1} project..." - oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:$TILLER:tiller - echo -fi - -# Configure the required OpenShift Policies and SCC privileges for the operation of the Solace message router software -echo "Granting the ${1} project policies and SCC privileges for correct operation..." -oc policy add-role-to-user edit system:serviceaccount:$PROJECT:default -echo "Setting up deployment in unprivileged container:" -oc create -f templates/sccForUnprivilegedCont.yaml -oc adm policy add-scc-to-user scc-solace-in-unprivileged-container system:serviceaccount:$PROJECT:default -oc adm policy add-cluster-role-to-user storage-admin admin - diff --git a/scripts/templates/deployHelmServer.yaml b/scripts/templates/deployHelmServer.yaml deleted file mode 100644 index 90ac078..0000000 --- a/scripts/templates/deployHelmServer.yaml +++ /dev/null @@ -1,81 +0,0 @@ -kind: Template -apiVersion: v1 -objects: -- kind: ServiceAccount - apiVersion: v1 - metadata: - name: tiller - -- kind: Role - apiVersion: v1 - metadata: - name: tiller - rules: - - apiGroups: - - "" - resources: - - configmaps - verbs: - - create - - get - - list - - update - - delete - - apiGroups: - - "" - resources: - - namespaces - verbs: - - get - -- kind: RoleBinding - apiVersion: v1 - metadata: - name: tiller - roleRef: - name: tiller - namespace: ${TILLER_NAMESPACE} - subjects: - - kind: ServiceAccount - name: tiller - -- apiVersion: extensions/v1beta1 - kind: Deployment - metadata: - name: tiller - spec: - replicas: 1 - selector: - matchLabels: - app: helm - name: tiller - template: - metadata: - labels: - app: helm - name: tiller - spec: - containers: - - name: tiller - image: gcr.io/kubernetes-helm/tiller:v2.14.0 - env: - - name: TILLER_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - ports: - - name: tiller - containerPort: 44134 - readinessProbe: - httpGet: - path: /readiness - port: 44135 - livenessProbe: - httpGet: - path: /liveness - port: 44135 - serviceAccountName: tiller - -parameters: -- name: TILLER_NAMESPACE - required: true diff --git a/scripts/templates/sccForUnprivilegedCont.yaml b/scripts/templates/sccForUnprivilegedCont.yaml deleted file mode 100644 index ce5fea0..0000000 --- a/scripts/templates/sccForUnprivilegedCont.yaml +++ /dev/null @@ -1,42 +0,0 @@ -kind: SecurityContextConstraints -apiVersion: v1 -metadata: - name: scc-solace-in-unprivileged-container -allowPrivilegedContainer: false -allowedCapabilities: - - IPC_LOCK - - SYS_NICE - - SETPCAP - - MKNOD - - AUDIT_WRITE - - CHOWN - - NET_RAW - - DAC_OVERRIDE - - FOWNER - - FSETID - - KILL - - SETGID - - SETUID - - NET_BIND_SERVICE - - SYS_CHROOT - - SETFCAP -allowHostIPC: true -defaultCapabilities: - - IPC_LOCK - - SYS_NICE -runAsUser: - type: MustRunAsRange -seLinuxContext: - 
type: RunAsAny -fsGroup: - type: MustRunAs - ranges: - - min: 501 - max: 501 -supplementalGroups: - type: RunAsAny -users: -- my-admin-user -groups: -- my-admin-group - diff --git a/templates/messagebroker_ha_template.yaml b/templates/eventbroker_ha_template.yaml similarity index 79% rename from templates/messagebroker_ha_template.yaml rename to templates/eventbroker_ha_template.yaml index b8740a1..7b07747 100644 --- a/templates/messagebroker_ha_template.yaml +++ b/templates/eventbroker_ha_template.yaml @@ -2,42 +2,38 @@ apiVersion: v1 kind: Template metadata: - name: solace-messagebroker-ha-template + name: pubsubplus-eventbroker-ha-template annotations: - description: Deploys Solace Message Broker in an HA configuration using persistent storage + description: Deploys PubSub+ Event Broker in an HA configuration using persistent storage objects: - kind: Secret apiVersion: v1 metadata: - name: "${DEPLOYMENT_NAME}-solace-secrets" + name: "${DEPLOYMENT_NAME}-pubsubplus-secrets" labels: - heritage: Tiller - release: "${DEPLOYMENT_NAME}" - chart: solace-1.0.1 - app: solace + app.kubernetes.io/instance: "${DEPLOYMENT_NAME}" + app.kubernetes.io/name: pubsubplus type: Opaque data: - username_admin_password: "${MESSAGEBROKER_ADMIN_PASSWORD}" + username_admin_password: "${EVENTBROKER_ADMIN_PASSWORD}" - kind: ConfigMap apiVersion: v1 metadata: - name: "${DEPLOYMENT_NAME}-solace" + name: "${DEPLOYMENT_NAME}-pubsubplus" labels: - heritage: Tiller - release: "${DEPLOYMENT_NAME}" - chart: solace-1.0.1 - app: solace + app.kubernetes.io/instance: "${DEPLOYMENT_NAME}" + app.kubernetes.io/name: pubsubplus data: init.sh: |- # export username_admin_passwordfilepath=/mnt/disks/secrets/username_admin_password export username_admin_password=`cat /mnt/disks/secrets/username_admin_password` export username_admin_globalaccesslevel=admin export service_ssh_port='2222' - export service_webtransport_port='60080' - export service_webtransport_tlsport='60443' - export service_semp_tlsport='60943' + export service_webtransport_port='8008' + export service_webtransport_tlsport='1443' + export service_semp_tlsport='1943' export logging_debug_output=all export system_scaling_maxconnectioncount="100" # [TODO] KBARR not using correct method of finding ordinal until we bump min Kubernetes release above 1.8.1 @@ -50,7 +46,7 @@ objects: else namespace=default fi - service="${DEPLOYMENT_NAME}-solace" + service="${DEPLOYMENT_NAME}-pubsubplus" # Deal with the fact we cannot accept "-" in routre names service_name=$(echo ${service} | sed 's/-//g') export routername=$(echo $(hostname) | sed 's/-//g') @@ -97,7 +93,7 @@ objects: role="" #exclude monitor node from config-sync check if [ "${node_ordinal}" != "2" ]; then - while [ ${count} -lt ${loop_guard} ]; do + while [ ${count} -lt ${loop_guard} ]; do role_results=`/mnt/disks/solace/semp_query.sh -n admin -p ${password} -u http://localhost:8080/SEMP \ -q "" \ -v "/rpc-reply/rpc/show/redundancy/active-standby-role[text()]"` @@ -117,13 +113,13 @@ objects: sleep ${pause} done if [ ${count} -eq ${loop_guard} ]; then - echo "`date` ERROR: ${APP}-Solace Management API never came up" >&2 - exit 1 + echo "`date` ERROR: ${APP}-Broker Management API never came up" >&2 + exit 1 fi count=0 echo "`date` INFO: ${APP}-Management API is up, determined that this node's active-standby role is: ${role}" - while [ ${count} -lt ${loop_guard} ]; do + while [ ${count} -lt ${loop_guard} ]; do online_results=`/mnt/disks/solace/semp_query.sh -n admin -p ${password} -u http://localhost:8080/SEMP \ -q "" \ -v 
"/rpc-reply/rpc/show/redundancy/virtual-routers/${role}/status/activity[text()]"` @@ -151,14 +147,14 @@ objects: done if [ ${count} -eq ${loop_guard} ]; then echo "`date` ERROR: ${APP}-Local activity state never become Local Active or Mate Active" >&2 - exit 1 + exit 1 fi # If we need to assert master, then we need to wait for mate to reconcile if [ "${resync_step}" = "assert-master" ]; then count=0 echo "`date` INFO: ${APP}-Waiting for mate activity state to be 'Standby'" - while [ ${count} -lt ${loop_guard} ]; do + while [ ${count} -lt ${loop_guard} ]; do online_results=`/mnt/disks/solace/semp_query.sh -n admin -p ${password} -u http://localhost:8080/SEMP \ -q "" \ -v "/rpc-reply/rpc/show/redundancy/virtual-routers/${role}/status/detail/priority-reported-by-mate/summary[text()]"` @@ -176,7 +172,7 @@ objects: done if [ ${count} -eq ${loop_guard} ]; then echo "`date` ERROR: ${APP}-Mate not in sync, never reached Standby" >&2 - exit 1 + exit 1 fi fi # if assert-master # Now can issue {resync_step} command and exit. @@ -184,7 +180,7 @@ objects: -q "<${resync_step}>" /mnt/disks/solace/semp_query.sh -n admin -p ${password} -u http://localhost:8080/SEMP \ -q "<${resync_step}>default" - echo "`date` INFO: ${APP}-Solace message broker bringup complete for this node." + echo "`date` INFO: ${APP}-PubSub+ message broker bringup complete for this node." fi # if not monitor exit 0 @@ -210,12 +206,16 @@ objects: --request PATCH --data "$(cat /tmp/patch_label.json)" \ -H "Authorization: Bearer $KUBE_TOKEN" -H "Content-Type:application/json-patch+json" \ $K8S/api/v1/namespaces/$NAMESPACE/pods/$HOSTNAME ; then - echo "`date` ERROR: ${APP}-Unable to update pod label, check access from pod to K8s API or RBAC authorization" >&2 - exit 1 + # Fall back to alternative method to update label + if ! curl -sSk --output /dev/null -H "Authorization: Bearer $KUBE_TOKEN" --request PATCH --data "$(cat /tmp/patch_label.json)" \ + -H "Content-Type:application/json-patch+json" \ + https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/$STATEFULSET_NAMESPACE/pods/$HOSTNAME ; then + echo "`date` ERROR: ${APP}-Unable to update pod label, check access from pod to K8s API or RBAC authorization" >&2 + exit 1 + fi fi fi } - # note that there are no re-tries here, if check fails the return not ready. 
APP=`basename "$0"` state_file=/tmp/activity_state @@ -223,7 +223,6 @@ objects: echo "unknown" > ${state_file} fi # HA config - version=${1} IFS='-' read -ra host_array <<< $(hostname) node_ordinal=${host_array[-1]} password=`cat /mnt/disks/secrets/username_admin_password` @@ -290,7 +289,7 @@ objects: # Creating marker - important that after initial startup pod keeps being ready to serve traffic during failover while redundancy is down echo "true" > ${redundacycheck_file} fi - + if [ "${node_ordinal}" = "2" ]; then # For monitor node just check for 3 online nodes in group; active label will always be "false" role_results=`/mnt/disks/solace/semp_query.sh -n admin -p ${password} -u http://localhost:8080/SEMP \ @@ -312,28 +311,23 @@ objects: fi # End Monitor Node health_result=`curl -s -o /dev/null -w "%{http_code}" http://localhost:5550/health-check/guaranteed-active` - + case "${health_result}" in "200") - echo "`date` INFO: ${APP}-Message Router is Active and Healthy" + echo "`date` INFO: ${APP}-Message Router is Active and Healthy" set_label "active" "true" $state_file exit 0 ;; "503") set_label "active" "false" $state_file - if (( "$version" < 7 )); then - echo "`date` INFO: ${APP}-Message Router is Healthy but not Active, this is K8S 1.6 ready" - exit 0 - else - echo "`date` INFO: ${APP}-Message Router is Healthy but not Active, further check required" - fi + echo "`date` INFO: ${APP}-Message Router is Healthy but not Active, further check required" ;; "") echo "`date` WARN: ${APP}-Unable to determine config role, failing readiness check" set_label "active" "false" $state_file exit 1 esac - + # Checking if Message Router is Standby case "${node_ordinal}" in "0") @@ -397,7 +391,7 @@ objects: u) url=$OPTARG ;; v) value_search=$OPTARG - ;; + ;; esac done shift $((OPTIND-1)) @@ -416,7 +410,7 @@ objects: fi query_response=`curl -sS -u ${name}:${password} ${url} -d "${query}"` # Validate first char of response is "<", otherwise no hope of being valid xml - if [[ ${query_response:0:1} != "<" ]] ; then + if [[ ${query_response:0:1} != "<" ]] ; then echo "no valid xml returned" exit 1 fi @@ -451,86 +445,133 @@ objects: # parameters: # type: gp2 +- kind: ServiceAccount + apiVersion: v1 + metadata: + name: "${DEPLOYMENT_NAME}-pubsubplus-sa" + labels: + app.kubernetes.io/instance: "${DEPLOYMENT_NAME}" + app.kubernetes.io/name: pubsubplus #end gcp + +- kind: Role + apiVersion: rbac.authorization.k8s.io/v1 + metadata: + name: "${DEPLOYMENT_NAME}-pubsubplus-podtagupdater" + rules: + - apiGroups: [""] # "" indicates the core API group + resources: ["pods"] + verbs: ["patch"] + +- kind: RoleBinding + apiVersion: rbac.authorization.k8s.io/v1 + metadata: + name: "${DEPLOYMENT_NAME}-pubsubplus-serviceaccounts-to-podtagupdater" + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: "${DEPLOYMENT_NAME}-pubsubplus-podtagupdater" + subjects: + - kind: ServiceAccount + name: "${DEPLOYMENT_NAME}-pubsubplus-sa" + - kind: Service apiVersion: v1 metadata: - name: "${DEPLOYMENT_NAME}-solace-discovery" + name: "${DEPLOYMENT_NAME}-pubsubplus-discovery" labels: - heritage: Tiller - release: "${DEPLOYMENT_NAME}" - chart: solace-1.0.1 - app: solace + app.kubernetes.io/instance: "${DEPLOYMENT_NAME}" + app.kubernetes.io/name: pubsubplus annotations: service.alpha.kubernetes.io/tolerate-unready-endpoints: "true" spec: ports: - - port: 8080 - name: semp + - port: 8080 + name: semp + - port: 8741 + name: ha-mate-link + - port: 8300 + name: ha-conf-sync0 + - port: 8301 + name: ha-conf-sync1 + - port: 8302 + 
name: ha-conf-sync2 clusterIP: None selector: - app: solace - release: "${DEPLOYMENT_NAME}" - + app.kubernetes.io/name: pubsubplus + app.kubernetes.io/instance: "${DEPLOYMENT_NAME}" + publishNotReadyAddresses: true - kind: Service apiVersion: v1 metadata: - name: "${DEPLOYMENT_NAME}-solace" + name: "${DEPLOYMENT_NAME}-pubsubplus" labels: - heritage: Tiller - release: "${DEPLOYMENT_NAME}" - chart: solace-1.0.1 - app: solace #end gcp + app.kubernetes.io/instance: "${DEPLOYMENT_NAME}" + app.kubernetes.io/name: pubsubplus #end gcp spec: type: LoadBalancer ports: - - port: 22 + - port: 2222 targetPort: 2222 protocol: TCP name: ssh - port: 8080 - targetPort: + targetPort: 8080 protocol: TCP name: semp + - port: 1943 + targetPort: 1943 + protocol: TCP + name: semptls - port: 55555 - targetPort: + targetPort: 55555 protocol: TCP name: smf - port: 55003 - targetPort: + targetPort: 55003 protocol: TCP - name: smfcompr + name: smfcomp - port: 55443 - targetPort: + targetPort: 55443 protocol: TCP name: smftls - - port: 943 - targetPort: 60943 - protocol: TCP - name: semptls - - port: 80 - targetPort: 60080 + - port: 8008 + targetPort: 8008 protocol: TCP name: web - - port: 443 - targetPort: 60443 + - port: 1443 + targetPort: 1443 protocol: TCP name: webtls + - port: 5672 + targetPort: 5672 + protocol: TCP + name: amqp + - port: 1883 + targetPort: 1883 + protocol: TCP + name: mqtt + - port: 9000 + targetPort: 9000 + protocol: TCP + name: rest selector: - app: solace - release: "${DEPLOYMENT_NAME}" + app.kubernetes.io/name: pubsubplus + app.kubernetes.io/instance: "${DEPLOYMENT_NAME}" active: "true" - kind: StatefulSet - apiVersion: apps/v1beta1 + apiVersion: apps/v1 metadata: - name: "${DEPLOYMENT_NAME}-solace" + name: "${DEPLOYMENT_NAME}-pubsubplus" labels: - app: solace - chart: solace-1.0.1 - release: "${DEPLOYMENT_NAME}" - heritage: Tiller + app.kubernetes.io/name: pubsubplus + app.kubernetes.io/instance: "${DEPLOYMENT_NAME}" spec: - serviceName: "${DEPLOYMENT_NAME}-solace-discovery" + selector: + matchLabels: + app.kubernetes.io/name: pubsubplus + app.kubernetes.io/instance: "${DEPLOYMENT_NAME}" + serviceName: "${DEPLOYMENT_NAME}-pubsubplus-discovery" replicas: 3 podManagementPolicy: Parallel updateStrategy: @@ -538,12 +579,14 @@ objects: template: metadata: labels: - app: solace - release: "${DEPLOYMENT_NAME}" + app.kubernetes.io/name: pubsubplus + app.kubernetes.io/instance: "${DEPLOYMENT_NAME}" spec: + serviceAccountName: "${DEPLOYMENT_NAME}-pubsubplus-sa" + terminationGracePeriodSeconds: 1200 containers: - - name: solace - image: "${DOCKER_REGISTRY_URL}:${MESSAGEBROKER_IMAGE_TAG}" + - name: pubsubplus + image: "${DOCKER_REGISTRY_URL}:${EVENTBROKER_IMAGE_TAG}" imagePullPolicy: IfNotPresent resources: requests: @@ -563,36 +606,35 @@ objects: exec: command: - /mnt/disks/solace/readiness_check.sh - - "7" securityContext: privileged: false - capabilities: - add: - - IPC_LOCK - - SYS_NICE env: - name: STATEFULSET_NAME - value: "${DEPLOYMENT_NAME}-solace" + value: "${DEPLOYMENT_NAME}-pubsubplus" - name: STATEFULSET_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - # [TODO] not using correct method of finding ordinal until we bump min Kubernetes release above 1.8.1 - # - name: STATEFULSET_ORDINAL - # valueFrom: - # fieldRef: - # fieldPath: metadata.annotations['annotationKey'] - command: - bash - "-ec" - | source /mnt/disks/solace/init.sh # not using postinstall hooks because of order dependencies - # launch config check then Solace so VCMR can provide return code + # launch config check 
then PubSub+ so VCMR can provide return code nohup /mnt/disks/solace/config-sync-check.sh & /usr/sbin/boot.sh + lifecycle: + preStop: + exec: + command: + - bash + - "-ec" + - | + while ! pgrep solacedaemon ; do sleep 1; done + killall solacedaemon; + while [ ! -d /usr/sw/var/db.upgrade ]; do sleep 1; done; volumeMounts: - name: config-map mountPath: /mnt/disks/solace @@ -616,35 +658,41 @@ objects: - name: data mountPath: /var/lib/solace/diags subPath: diags - # only mount when not using nfs + # only mount softAdb when not using NFS, comment it out otherwise - name: data mountPath: /usr/sw/internalSpool/softAdb - subPath: softAdb #end !nfs + subPath: softAdb ports: - containerPort: 2222 protocol: TCP - containerPort: 8080 protocol: TCP + - containerPort: 1943 + protocol: TCP - containerPort: 55555 protocol: TCP - containerPort: 55003 protocol: TCP - containerPort: 55443 protocol: TCP - - containerPort: 60943 + - containerPort: 8008 + protocol: TCP + - containerPort: 1443 + protocol: TCP + - containerPort: 5672 protocol: TCP - - containerPort: 60080 + - containerPort: 1883 protocol: TCP - - containerPort: 60443 + - containerPort: 9000 protocol: TCP volumes: - name: config-map configMap: - name: "${DEPLOYMENT_NAME}-solace" + name: "${DEPLOYMENT_NAME}-pubsubplus" defaultMode: 0755 - name: secrets secret: - secretName: "${DEPLOYMENT_NAME}-solace-secrets" + secretName: "${DEPLOYMENT_NAME}-pubsubplus-secrets" defaultMode: 0400 - name: dshm emptyDir: @@ -659,11 +707,11 @@ objects: accessModes: [ "ReadWriteOnce" ] resources: requests: - storage: "${MESSAGEBROKER_STORAGE_SIZE}" + storage: "${EVENTBROKER_STORAGE_SIZE}" parameters: - name: DEPLOYMENT_NAME - displayName: Solace Message Broker Deployment Name + displayName: PubSub+ Event Broker Deployment Name description: The prefix to use for object names generate: expression from: '[A-Z0-9]{8}' @@ -671,21 +719,21 @@ parameters: required: true - name: DOCKER_REGISTRY_URL displayName: Docker Registry URL - description: The Docker registry URL for the registry containing the Solace Message Broker docker image + description: The Docker registry URL for the registry containing the PubSub+ Event Broker docker image value: solace/solace-pubsub-standard required: true - - name: MESSAGEBROKER_IMAGE_TAG - displayName: Solace Message Broker Docker Image Tag - description: The Docker image tag for the Solace Message Broker docker image from your Docker registry + - name: EVENTBROKER_IMAGE_TAG + displayName: PubSub+ Event Broker Docker Image Tag + description: The Docker image tag for the PubSub+ Event Broker docker image from your Docker registry value: latest required: true - - name: MESSAGEBROKER_ADMIN_PASSWORD - displayName: Base64 encoded password for Solace username 'admin' - description: The Message Broker 'admin' user's password (base64 encoded). This Solace OpenShift template will create an administrative user with username 'admin' with specified password. + - name: EVENTBROKER_ADMIN_PASSWORD + displayName: Base64 encoded password for PubSub+ username 'admin' + description: The Event Broker 'admin' user's password (base64 encoded). This PubSub+ OpenShift template will create an administrative user with username 'admin' with specified password. 
value: "cEBzc3cwcmQ=" # password 'p@ssw0rd' required: true - - name: MESSAGEBROKER_STORAGE_SIZE - displayName: Solace Message Broker Persistent Storage Disk Size - description: The size in gigabytes for a Message Broker Pod's persistent volume (with suffix 'Gi'), example 30Gi for 30 gigabytes + - name: EVENTBROKER_STORAGE_SIZE + displayName: PubSub+ Event Broker Persistent Storage Disk Size + description: The size in gigabytes for a Event Broker Pod's persistent volume (with suffix 'Gi'), example 30Gi for 30 gigabytes value: 30Gi required: true diff --git a/templates/messagebroker_singlenode_template.yaml b/templates/eventbroker_singlenode_template.yaml similarity index 66% rename from templates/messagebroker_singlenode_template.yaml rename to templates/eventbroker_singlenode_template.yaml index e70378c..32993f8 100644 --- a/templates/messagebroker_singlenode_template.yaml +++ b/templates/eventbroker_singlenode_template.yaml @@ -2,42 +2,38 @@ apiVersion: v1 kind: Template metadata: - name: solace-messagebroker-singlenode-template + name: pubsubplus-eventbroker-singlenode-template annotations: - description: Deploys Solace Message Broker in a Single Node configuration + description: Deploys PubSub+ Event Broker in a Single Node configuration objects: - kind: Secret apiVersion: v1 metadata: - name: "${DEPLOYMENT_NAME}-solace-secrets" + name: "${DEPLOYMENT_NAME}-pubsubplus-secrets" labels: - heritage: Tiller - release: "${DEPLOYMENT_NAME}" - chart: solace-1.0.1 - app: solace + app.kubernetes.io/instance: "${DEPLOYMENT_NAME}" + app.kubernetes.io/name: pubsubplus type: Opaque data: - username_admin_password: "${MESSAGEBROKER_ADMIN_PASSWORD}" + username_admin_password: "${EVENTBROKER_ADMIN_PASSWORD}" - kind: ConfigMap apiVersion: v1 metadata: - name: "${DEPLOYMENT_NAME}-solace" + name: "${DEPLOYMENT_NAME}-pubsubplus" labels: - heritage: Tiller - release: "${DEPLOYMENT_NAME}" - chart: solace-1.0.1 - app: solace + app.kubernetes.io/instance: "${DEPLOYMENT_NAME}" + app.kubernetes.io/name: pubsubplus data: init.sh: |- # export username_admin_passwordfilepath=/mnt/disks/secrets/username_admin_password export username_admin_password=`cat /mnt/disks/secrets/username_admin_password` export username_admin_globalaccesslevel=admin export service_ssh_port='2222' - export service_webtransport_port='60080' - export service_webtransport_tlsport='60443' - export service_semp_tlsport='60943' + export service_webtransport_port='8008' + export service_webtransport_tlsport='1443' + export service_semp_tlsport='1943' export logging_debug_output=all export system_scaling_maxconnectioncount="100" @@ -67,12 +63,16 @@ objects: --request PATCH --data "$(cat /tmp/patch_label.json)" \ -H "Authorization: Bearer $KUBE_TOKEN" -H "Content-Type:application/json-patch+json" \ $K8S/api/v1/namespaces/$NAMESPACE/pods/$HOSTNAME ; then - echo "`date` ERROR: ${APP}-Unable to update pod label, check access from pod to K8s API or RBAC authorization" >&2 - exit 1 + # Fall back to alternative method to update label + if ! curl -sSk --output /dev/null -H "Authorization: Bearer $KUBE_TOKEN" --request PATCH --data "$(cat /tmp/patch_label.json)" \ + -H "Content-Type:application/json-patch+json" \ + https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/$STATEFULSET_NAMESPACE/pods/$HOSTNAME ; then + echo "`date` ERROR: ${APP}-Unable to update pod label, check access from pod to K8s API or RBAC authorization" >&2 + exit 1 + fi fi fi } - # note that there are no re-tries here, if check fails the return not ready. 
APP=`basename "$0"` state_file=/tmp/activity_state @@ -125,7 +125,7 @@ objects: u) url=$OPTARG ;; v) value_search=$OPTARG - ;; + ;; esac done shift $((OPTIND-1)) @@ -144,7 +144,7 @@ objects: fi query_response=`curl -sS -u ${name}:${password} ${url} -d "${query}"` # Validate first char of response is "<", otherwise no hope of being valid xml - if [[ ${query_response:0:1} != "<" ]] ; then + if [[ ${query_response:0:1} != "<" ]] ; then echo "no valid xml returned" exit 1 fi @@ -179,86 +179,106 @@ objects: # parameters: # type: gp2 -- kind: Service +- kind: ServiceAccount apiVersion: v1 metadata: - name: "${DEPLOYMENT_NAME}-solace-discovery" + name: "${DEPLOYMENT_NAME}-pubsubplus-sa" labels: - heritage: Tiller - release: "${DEPLOYMENT_NAME}" - chart: solace-1.0.1 - app: solace - annotations: - service.alpha.kubernetes.io/tolerate-unready-endpoints: "true" - spec: - ports: - - port: 8080 - name: semp - clusterIP: None - selector: - app: solace - release: "${DEPLOYMENT_NAME}" + app.kubernetes.io/instance: "${DEPLOYMENT_NAME}" + app.kubernetes.io/name: pubsubplus #end gcp + +- kind: Role + apiVersion: rbac.authorization.k8s.io/v1 + metadata: + name: "${DEPLOYMENT_NAME}-pubsubplus-podtagupdater" + rules: + - apiGroups: [""] # "" indicates the core API group + resources: ["pods"] + verbs: ["patch"] +- kind: RoleBinding + apiVersion: rbac.authorization.k8s.io/v1 + metadata: + name: "${DEPLOYMENT_NAME}-pubsubplus-serviceaccounts-to-podtagupdater" + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: "${DEPLOYMENT_NAME}-pubsubplus-podtagupdater" + subjects: + - kind: ServiceAccount + name: "${DEPLOYMENT_NAME}-pubsubplus-sa" - kind: Service apiVersion: v1 metadata: - name: "${DEPLOYMENT_NAME}-solace" + name: "${DEPLOYMENT_NAME}-pubsubplus" labels: - heritage: Tiller - release: "${DEPLOYMENT_NAME}" - chart: solace-1.0.1 - app: solace #end gcp + app.kubernetes.io/instance: "${DEPLOYMENT_NAME}" + app.kubernetes.io/name: pubsubplus #end gcp spec: type: LoadBalancer ports: - - port: 22 + - port: 2222 targetPort: 2222 protocol: TCP name: ssh - port: 8080 - targetPort: + targetPort: 8080 protocol: TCP name: semp + - port: 1943 + targetPort: 1943 + protocol: TCP + name: semptls - port: 55555 - targetPort: + targetPort: 55555 protocol: TCP name: smf - port: 55003 - targetPort: + targetPort: 55003 protocol: TCP - name: smfcompr + name: smfcomp - port: 55443 - targetPort: + targetPort: 55443 protocol: TCP name: smftls - - port: 943 - targetPort: 60943 - protocol: TCP - name: semptls - - port: 80 - targetPort: 60080 + - port: 8008 + targetPort: 8008 protocol: TCP name: web - - port: 443 - targetPort: 60443 + - port: 1443 + targetPort: 1443 protocol: TCP name: webtls + - port: 5672 + targetPort: 5672 + protocol: TCP + name: amqp + - port: 1883 + targetPort: 1883 + protocol: TCP + name: mqtt + - port: 9000 + targetPort: 9000 + protocol: TCP + name: rest selector: - app: solace - release: "${DEPLOYMENT_NAME}" + app.kubernetes.io/name: pubsubplus + app.kubernetes.io/instance: "${DEPLOYMENT_NAME}" active: "true" - kind: StatefulSet - apiVersion: apps/v1beta1 + apiVersion: apps/v1 metadata: - name: "${DEPLOYMENT_NAME}-solace" + name: "${DEPLOYMENT_NAME}-pubsubplus" labels: - app: solace - chart: solace-1.0.1 - release: "${DEPLOYMENT_NAME}" - heritage: Tiller + app.kubernetes.io/name: pubsubplus + app.kubernetes.io/instance: "${DEPLOYMENT_NAME}" spec: - serviceName: "${DEPLOYMENT_NAME}-solace-discovery" + selector: + matchLabels: + app.kubernetes.io/name: pubsubplus + app.kubernetes.io/instance: 
"${DEPLOYMENT_NAME}" + serviceName: "${DEPLOYMENT_NAME}-pubsubplus-discovery" replicas: 1 podManagementPolicy: Parallel updateStrategy: @@ -266,12 +286,14 @@ objects: template: metadata: labels: - app: solace - release: "${DEPLOYMENT_NAME}" + app.kubernetes.io/name: pubsubplus + app.kubernetes.io/instance: "${DEPLOYMENT_NAME}" spec: + serviceAccountName: "${DEPLOYMENT_NAME}-pubsubplus-sa" + terminationGracePeriodSeconds: 1200 containers: - - name: solace - image: "${DOCKER_REGISTRY_URL}:${MESSAGEBROKER_IMAGE_TAG}" + - name: pubsubplus + image: "${DOCKER_REGISTRY_URL}:${EVENTBROKER_IMAGE_TAG}" imagePullPolicy: IfNotPresent resources: requests: @@ -291,36 +313,35 @@ objects: exec: command: - /mnt/disks/solace/readiness_check.sh - - "7" securityContext: privileged: false - capabilities: - add: - - IPC_LOCK - - SYS_NICE env: - name: STATEFULSET_NAME - value: "${DEPLOYMENT_NAME}-solace" + value: "${DEPLOYMENT_NAME}-pubsubplus" - name: STATEFULSET_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - # [TODO] not using correct method of finding ordinal until we bump min Kubernetes release above 1.8.1 - # - name: STATEFULSET_ORDINAL - # valueFrom: - # fieldRef: - # fieldPath: metadata.annotations['annotationKey'] - command: - bash - "-ec" - | source /mnt/disks/solace/init.sh # not using postinstall hooks because of order dependencies - # launch config check then Solace so VCMR can provide return code + # launch config check then PubSub+ so VCMR can provide return code nohup /mnt/disks/solace/config-sync-check.sh & /usr/sbin/boot.sh + lifecycle: + preStop: + exec: + command: + - bash + - "-ec" + - | + while ! pgrep solacedaemon ; do sleep 1; done + killall solacedaemon; + while [ ! -d /usr/sw/var/db.upgrade ]; do sleep 1; done; volumeMounts: - name: config-map mountPath: /mnt/disks/solace @@ -344,35 +365,41 @@ objects: - name: data mountPath: /var/lib/solace/diags subPath: diags - # only mount when not using nfs + # only mount softAdb when not using NFS, comment it out otherwise - name: data mountPath: /usr/sw/internalSpool/softAdb - subPath: softAdb #end !nfs + subPath: softAdb ports: - containerPort: 2222 protocol: TCP - containerPort: 8080 protocol: TCP + - containerPort: 1943 + protocol: TCP - containerPort: 55555 protocol: TCP - containerPort: 55003 protocol: TCP - containerPort: 55443 protocol: TCP - - containerPort: 60943 + - containerPort: 8008 + protocol: TCP + - containerPort: 1443 + protocol: TCP + - containerPort: 5672 protocol: TCP - - containerPort: 60080 + - containerPort: 1883 protocol: TCP - - containerPort: 60443 + - containerPort: 9000 protocol: TCP volumes: - name: config-map configMap: - name: "${DEPLOYMENT_NAME}-solace" + name: "${DEPLOYMENT_NAME}-pubsubplus" defaultMode: 0755 - name: secrets secret: - secretName: "${DEPLOYMENT_NAME}-solace-secrets" + secretName: "${DEPLOYMENT_NAME}-pubsubplus-secrets" defaultMode: 0400 - name: dshm emptyDir: @@ -387,11 +414,11 @@ objects: accessModes: [ "ReadWriteOnce" ] resources: requests: - storage: "${MESSAGEBROKER_STORAGE_SIZE}" + storage: "${EVENTBROKER_STORAGE_SIZE}" parameters: - name: DEPLOYMENT_NAME - displayName: Solace Message Broker Deployment Name + displayName: PubSub+ Event Broker Deployment Name description: The prefix to use for object names generate: expression from: '[A-Z0-9]{8}' @@ -399,21 +426,21 @@ parameters: required: true - name: DOCKER_REGISTRY_URL displayName: Docker Registry URL - description: The Docker registry URL for the registry containing the Solace Message Broker docker image + description: The 
Docker registry URL for the registry containing the PubSub+ Event Broker docker image value: solace/solace-pubsub-standard required: true - - name: MESSAGEBROKER_IMAGE_TAG - displayName: Solace Message Broker Docker Image Tag - description: The Docker image tag for the Solace Message Broker docker image from your Docker registry + - name: EVENTBROKER_IMAGE_TAG + displayName: PubSub+ Event Broker Docker Image Tag + description: The Docker image tag for the PubSub+ Event Broker docker image from your Docker registry value: latest required: true - - name: MESSAGEBROKER_ADMIN_PASSWORD - displayName: Base64 encoded password for Solace username 'admin' - description: The Message Broker 'admin' user's password (base64 encoded). This Solace OpenShift template will create an administrative user with username 'admin' with specified password. + - name: EVENTBROKER_ADMIN_PASSWORD + displayName: Base64 encoded password for PubSub+ username 'admin' + description: The Event Broker 'admin' user's password (base64 encoded). This PubSub+ OpenShift template will create an administrative user with username 'admin' with specified password. value: "cEBzc3cwcmQ=" # password 'p@ssw0rd' required: true - - name: MESSAGEBROKER_STORAGE_SIZE - displayName: Solace Message Broker Persistent Storage Disk Size - description: The size in gigabytes for a Message Broker Pod's persistent volume (with suffix 'Gi'), example 30Gi for 30 gigabytes + - name: EVENTBROKER_STORAGE_SIZE + displayName: PubSub+ Event Broker Persistent Storage Disk Size + description: The size in gigabytes for a Event Broker Pod's persistent volume (with suffix 'Gi'), example 30Gi for 30 gigabytes value: 30Gi required: true
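
For reference, the template parameters defined above are supplied at deployment time when processing the template with the `oc` CLI. The following is a minimal, illustrative sketch of deploying the single-node template into an existing project; the deployment name, plaintext password, image tag, and storage size shown here are assumed example values and are not prescribed by the templates:

```sh
# Base64-encode the admin password expected by the EVENTBROKER_ADMIN_PASSWORD parameter.
# 'p@ssw0rd' is an illustrative plaintext matching the template's default encoded value.
ADMIN_PASSWORD_BASE64=$(echo -n 'p@ssw0rd' | base64)

# Process the single-node template and create the resulting objects in the current project.
# DEPLOYMENT_NAME, EVENTBROKER_IMAGE_TAG and EVENTBROKER_STORAGE_SIZE are example values.
oc process -f templates/eventbroker_singlenode_template.yaml \
  -p DEPLOYMENT_NAME=test-singlenode \
  -p DOCKER_REGISTRY_URL=solace/solace-pubsub-standard \
  -p EVENTBROKER_IMAGE_TAG=latest \
  -p EVENTBROKER_ADMIN_PASSWORD=${ADMIN_PASSWORD_BASE64} \
  -p EVENTBROKER_STORAGE_SIZE=30Gi | oc create -f -
```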