This document provides detailed information for deploying Solace PubSub+ Software Event Broker on Kubernetes.
- For a hands-on quick start, refer to the Quick Start guide.
- For the `pubsubplus` Helm chart configuration options, refer to the PubSub+ Software Event Broker Helm Chart Reference.
This document is applicable to any platform provider supporting Kubernetes.
Contents:
- The Solace PubSub+ Software Event Broker
- Overview
- PubSub+ Event Broker Deployment Considerations
- Deployment Prerequisites
- Deployment steps
- Validating the Deployment
- Troubleshooting
- Modifying or upgrading a Deployment
- Re-installing a Deployment
- Deleting a Deployment
- Backing Up and Restoring a Deployment
The PubSub+ Software Event Broker of the Solace PubSub+ Platform efficiently streams event-driven information between applications, IoT devices and user interfaces running in the cloud, on-premises, and hybrid environments using open APIs and protocols like AMQP, JMS, MQTT, REST and WebSocket. It can be installed into a variety of public and private clouds, PaaS, and on-premises environments, and brokers in multiple locations can be linked together in an event mesh to dynamically share events across the distributed enterprise.
This document assumes a basic understanding of Kubernetes concepts.
For an example deployment diagram, check out the PubSub+ Event Broker on Google Kubernetes Engine (GKE) quickstart.
Multiple YAML templates define the PubSub+ Kubernetes deployment, with several parameters as deployment options. The templates are packaged as the `pubsubplus` Helm chart to enable easy customization by only specifying the non-default parameter values, without the need to edit the template files.
There are two deployment options described in this document:
- The recommended option is to use the Kubernetes Helm tool, which can also manage your deployment's lifecycle, including upgrade and delete.
- Another option is to generate a set of templates with customized values from the PubSub+ Helm chart and then use the Kubernetes native `kubectl` tool to deploy. The deployment will use the authorizations of the requesting user. However, in this case, Helm will not be able to manage your Kubernetes rollouts' lifecycle.
It is also important to know that Helm is a templating tool that helps package the PubSub+ Software Event Broker deployment into charts. It is most useful when first setting up broker nodes on the Kubernetes cluster, and it can handle the install-update-delete lifecycle of the broker nodes deployed to the cluster. It cannot be used to scale up, scale down, or apply custom configuration to an already deployed PubSub+ Software Event Broker.
The next sections will provide details on the PubSub+ Helm chart, dependencies and customization options, followed by deployment prerequisites and the actual deployment steps.
The following diagram illustrates the template organization used for the PubSub+ Deployment chart. Note that only the minimum is shown in this diagram, to give you some background regarding the relationships and major functions.
The StatefulSet template controls the pods of a PubSub+ Software Event Broker deployment. It also mounts the scripts from the ConfigMap and the files from the Secrets and maps the event broker data directories to a storage volume through a StorageClass, if configured. The Service template provides the event broker services at defined ports. The Service-Discovery template is only used internally, so pods in a PubSub+ event broker redundancy group can communicate with each other in an HA setting.
All the `pubsubplus` chart parameters are documented in the PubSub+ Software Event Broker Helm Chart reference.
Solace PubSub+ Software Event Broker can be scaled vertically by specifying either:
- `solace.size` - simplified scaling based on the maximum number of client connections; or
- `solace.systemScaling` - enables defining all scaling parameters and pod resources
Depending on the `solace.redundancy` parameter, one event broker pod is deployed in a single-node standalone deployment, or three pods if deploying a High-Availability (HA) group.
Horizontal scaling is possible through connecting multiple deployments.
The broker nodes are scaled by the maximum number of concurrent client connections, controlled by the `solace.size` chart parameter.
The broker container CPU and memory resource requirements are assigned according to the tier, and are summarized here from the Solace documentation for the possible `solace.size` parameter values:
- `dev`: no guaranteed performance, minimum requirements: 1 CPU, 3.4 GiB memory
- `prod100`: up to 100 connections, minimum requirements: 2 CPU, 3.4 GiB memory
- `prod1k`: up to 1,000 connections, minimum requirements: 2 CPU, 6.4 GiB memory
- `prod10k`: up to 10,000 connections, minimum requirements: 4 CPU, 12.2 GiB memory
- `prod100k`: up to 100,000 connections, minimum requirements: 8 CPU, 30.3 GiB memory
- `prod200k`: up to 200,000 connections, minimum requirements: 12 CPU, 51.4 GiB memory
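For illustration, a standalone deployment sized for up to 1,000 connections could be requested as in the following minimal sketch; the release name `my-release` and the `solacecharts` Helm repo setup are described later in the Deployment steps:

```bash
# Sketch: standalone broker at the prod1k scaling tier (up to 1,000 client connections)
helm install my-release solacecharts/pubsubplus --set solace.size=prod1k
```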
This option overrides simplified vertical scaling. It enables specifying each supported broker scaling parameter, currently:
- "maxConnections", in
solace.systemScaling.maxConnections
parameter - "maxQueueMessages", in
solace.systemScaling.maxQueueMessages
parameter - "maxSpoolUsage", in
solace.systemScaling.maxSpoolUsage
parameter
Additionally, CPU and memory must be sized and provided in the `solace.systemScaling.cpu` and `solace.systemScaling.memory` parameters. Use the Solace online System Resource Calculator to determine CPU and memory requirements for the selected scaling parameters.
Note: beyond CPU and memory requirements, required storage size (see next section) also depends significantly on scaling. The calculator can be used to determine that as well.
Also note that specifying maxConnections, maxQueueMessages, and maxSpoolUsage on initial deployment will overwrite the broker's default values. Doing the same using Helm upgrade on an existing deployment, however, will not overwrite these values in the broker's configuration, but it can be used to prepare (first step) for a manual scale-up through the CLI, where these parameters can actually be changed (second step).
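As a hedged sketch, a comprehensive-scaling deployment passes all scaling parameters and pod resources explicitly; the values below are placeholders only - use the System Resource Calculator and the Helm chart reference to determine real values and their units:

```bash
# Sketch with placeholder values - determine actual sizing with the calculator
helm install my-release solacecharts/pubsubplus \
  --set solace.systemScaling.maxConnections=100 \
  --set solace.systemScaling.maxQueueMessages=100 \
  --set solace.systemScaling.maxSpoolUsage=1000 \
  --set solace.systemScaling.cpu=2 \
  --set solace.systemScaling.memory=4025Mi \
  --set storage.size=30Gi
```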
One of the important parameters available to configure PubSub+ Software Event Broker HA is `podDisruptionBudget`.
This helps you control and limit the disruption to your application when its pods need to be rescheduled for upgrades, maintenance or any other reason.
This is only available when the PubSub+ Software Event Broker is deployed in high-availability (HA) mode, that is, with `solace.redundancy=true`.
In an HA deployment with Primary, Backup, and Monitor nodes, a minimum of 2 nodes is required to reach a quorum. The pod disruption budget defaults to a minimum of two nodes when enabled.
To enable this functionality, set both `solace.podDisruptionBudgetForHA=true` and `solace.redundancy=true`.
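For example, a minimal sketch using the published chart:

```bash
# HA deployment with a PodDisruptionBudget enabled
helm install my-release solacecharts/pubsubplus \
  --set solace.redundancy=true,solace.podDisruptionBudgetForHA=true
```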
The Kubernetes StatefulSet which controls the pods that make up a PubSub+ broker deployment in an HA redundancy group does not distinguish between PubSub+ HA node types: it assigns the same CPU and memory resources to pods hosting worker and monitoring node types, even though monitoring nodes have minimal resource requirements.
To address this, a "solace-pod-modifier" Kubernetes admission plugin is provided as part of this repo: when deployed it intercepts pod create requests and can set the lower resource requirements for broker monitoring nodes only.
Also ensure that the Helm chart parameter `solace.podModifierEnabled: true` is defined to provide the necessary annotations to the PubSub+ broker pods; this acts as a "control switch" that enables the monitoring pod resource modification.
Refer to the Readme of the plugin for details on how to activate and use it. Note: the plugin requires Kubernetes v1.16 or later.
Note: the use of the "solace-pod-modifier" Kubernetes admission plugin is not mandatory. If it is not activated or not working, then the default behavior applies: monitoring nodes will have the same resource requirements as the worker nodes. If "solace-pod-modifier" is activated later, then as long as the monitoring node pods have the correct annotations, they can be deleted and the reduced resources will apply after they are recreated.
The PubSub+ deployment uses disk storage for logging, configuration, guaranteed messaging and other purposes, allocated from Kubernetes volumes.
Broker versions prior to 9.12 required separate volumes mounted for each storage functionality, making up a storage-group from individual storage-elements. Versions 9.12 and later can have a single mounted storage-group that is divided up internally, but they still support the legacy mounting of storage-elements. It is recommended to set the parameter `storage.useStorageGroup=true` if using broker version 9.12 or later; do not use it on earlier versions.
If using simplified vertical scaling, set the following storage size (`storage.size` parameter) for the scaling tiers:
- `dev`: no guaranteed performance: 5GB
- `prod100`: up to 100 connections, 7GB
- `prod1k`: up to 1,000 connections, 14GB
- `prod10k`: up to 10,000 connections, 18GB
- `prod100k`: up to 100,000 connections, 30GB
- `prod200k`: up to 200,000 connections, 34GB
If using Comprehensive vertical scaling, use the calculator to determine storage size.
Using persistent storage is recommended; otherwise, if pod-local storage is used, data will be lost with the loss of a pod. The `storage.persistent` parameter is set to `true` by default.
The `pubsubplus` chart supports allocation of new storage volumes or mounting volumes with existing data. To avoid data corruption, ensure that clean new volumes are allocated for new deployments.
The recommended default allocation is to use Kubernetes Storage Classes with Dynamic Volume Provisioning. The `pubsubplus` chart deployment will create a Persistent Volume Claim (PVC) specifying the size and the Storage Class of the requested volume, and a Persistent Volume (PV) that meets the requirements will be allocated. Both the PVC and PV names will be linked to the deployment's name. When deleting the event broker pod(s) or even the entire deployment, the PVC and the allocated PV are not deleted, so potentially complex configuration is preserved. They will be re-mounted and reused with the existing configuration when a new pod starts (controlled by the StatefulSet, automatically matched to the old pod even in an HA deployment) or when a deployment with the same name as the old one is started. Explicitly delete a PVC if it is no longer needed, which will also delete the corresponding PV - refer to Deleting a Deployment.
Instead of using a storage class, the `pubsubplus` chart also allows you to describe how to assign storage by adding your own YAML fragment in the `storage.customVolumeMount` parameter. The fragment is inserted for the `data` volume in the `{spec.template.spec.volumes}` section of the StatefulSet. Note that in this case the `storage.useStorageClass` parameter is ignored.
The following are examples of how to specify parameter values in common use cases:
When deploying PubSub+ in an HA redundancy group, monitoring nodes have minimal storage requirements compared to worker nodes. The default `storage.monitorStorageSize` Helm chart value enables setting and creating smaller storage for Monitor pods hosting monitoring nodes as a pre-install hook in an HA deployment (`solace.redundancy=true`), before larger storage would be automatically created. Note that this setting is effective for initial deployments only; it cannot be used to upgrade an existing deployment with storage already allocated for monitoring nodes. A workaround is to mark the Monitor pod storage for deletion with `kubectl delete pvc <monitoring-pod-pvc>` (it will not be deleted immediately, only after the Monitor pod has been deleted), then follow the steps to recreate the deployment.
Set the `storage.useStorageClass` parameter to use a particular storage class, or leave this parameter undefined (the default) to allocate from your platform's "default" storage class - ensure one exists.
# Check existing storage classes
kubectl get storageclass
Create a specific storage class if no existing storage class meets your needs. Refer to your Kubernetes environment's documentation if a StorageClass needs to be created or to understand the differences if there are multiple options. Example:
# AWS fast storage class example
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1
  fsType: xfs
If using NFS, or generally if allocating from a defined Kubernetes Persistent Volume, specify a `storageClassName` in the PV manifest as in this NFS example, then set the `storage.useStorageClass` parameter to the same value:
# Persistent Volume example
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  storageClassName: nfs
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /tmp
    server: 172.17.0.2
Note: NFS is currently supported for development and demo purposes. If using NFS, also set the `storage.slow` parameter to `true`.
You can use an existing PVC with its associated PV for storage, but be aware that the deployment will try to use any existing, potentially incompatible, configuration data on that volume.
Provide this custom YAML fragment in `storage.customVolumeMount`:
customVolumeMount: |
  persistentVolumeClaim:
    claimName: existing-pvc-name
The PubSub+ Software Event Broker Kubernetes deployment is expected to work with all types of volumes your environment supports. In this case, provide the specifics of mounting the volume in a custom YAML fragment in `storage.customVolumeMount`.
The following shows how to implement the gcePersistentDisk example; note how the portion of the pod manifest example after `{spec.volumes.name}` is specified:
customVolumeMount: |
  gcePersistentDisk:
    pdName: my-data-disk
    fsType: ext4
Another example is using hostPath:
customVolumeMount: |
  hostPath:
    # directory location on host
    path: /data
    # this field is optional
    type: Directory
The PubSub+ Software Event Broker has been tested to work with the following: Portworx, Ceph, Cinder (OpenStack), and vSphere storage for Kubernetes, as documented here.
However, note that for EKS and GKE, `xfs` produced the best results during tests. AKS users can opt for Locally Redundant Storage (LRS) redundancy, as it produced the best results compared with the other types available on Azure.
PubSub+ services can be exposed through one of the following Kubernetes service types by specifying the `service.type` parameter:
- LoadBalancer (default) - a load balancer, typically externally accessible depending on the K8s provider.
- NodePort - maps PubSub+ services to a port on a Kubernetes node; external access depends on access to the node.
- ClusterIP - internal access only from within K8s.
Additionally, for all above service types, external access can be configured through K8s Ingress (see next section).
To support internal load balancers, a provider-specific service annotation may be added by defining the `service.annotations` parameter.
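As an illustration, an internal load balancer on AWS could be requested with a values fragment like the one below; the annotation key is provider-specific and shown here only as an assumption to be verified against your provider's documentation:

```yaml
# Sketch of a values fragment - the annotation key varies by cloud provider
service:
  type: LoadBalancer
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
```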
The `service.ports` parameter defines the services exposed. It specifies the event broker `containerPort` that provides the service and the mapping to the `servicePort` where the service can be accessed when using LoadBalancer or ClusterIP. Note that there is no control over which ports services are mapped to when using NodePort.
When using Helm to initiate a deployment, notes will be provided on the screen about how to obtain the service addresses and ports specific to your deployment - follow the "Services access" section of the notes.
A deployment is ready for service requests when there is a Solace pod that is running, `1/1` ready, and has the label `active=true`. The exposed `pubsubplus` service will forward traffic to that active event broker node. Important: "service" here means the Guaranteed Messaging level of Quality-of-Service (QoS) for event message persistence. Messaging traffic will not be forwarded if the service level is degraded to Direct Messages only.
The `LoadBalancer` or `NodePort` service types can be used to expose all services from one PubSub+ broker. Ingress may be used to enable efficient external access from a single external IP address to specific PubSub+ services, potentially provided by multiple brokers.
The following table gives an overview of how external access can be configured for PubSub+ services via Ingress.
| PubSub+ service / protocol, configuration and requirements | HTTP, no TLS | HTTPS with TLS terminate at ingress | HTTPS with TLS re-encrypt at ingress | General TCP over TLS with passthrough to broker |
|---|---|---|---|---|
| Notes: | -- | Requires TLS config on Ingress-controller | Requires TLS config on broker AND TLS config on Ingress-controller | Requires TLS config on broker. Client must use SNI to provide target host |
| WebSockets, MQTT over WebSockets | Supported | Supported | Supported | Supported (routing via SNI) |
| REST | Supported with restrictions: if publishing to a Queue, only the root path is supported in the Ingress rule or a rewrite target annotation must be used. For Topics, the initial path will become part of the topic name. | Supported, see prev. note | Supported, see prev. note | Supported (routing via SNI) |
| SEMP | Not recommended to expose management services without TLS | Supported with restrictions: (1) only the root path is supported in the Ingress rule or a rewrite target annotation must be used; (2) non-TLS access to SEMP must be enabled on the broker | Supported with restrictions: only the root path is supported in the Ingress rule or a rewrite target annotation must be used | Supported (routing via SNI) |
| SMF, SMF compressed, AMQP, MQTT | - | - | - | Supported (routing via SNI) |
| SSH* | - | - | - | - |
*SSH has been listed here for completeness only; external exposure is not recommended.
All examples assume NGINX is used as the ingress controller (documented here), selected because NGINX is supported by most Kubernetes providers. For other ingress controllers, refer to their respective documentation.
To deploy the NGINX Ingress Controller, refer to the Quick start in the NGINX documentation. After successful deployment get the ingress External-IP or FQDN with the following command:
kubectl get service ingress-nginx-controller --namespace=ingress-nginx
This is the IP address (or the IP address the FQDN resolves to) of the ingress, where external clients shall target their requests. Any additional DNS-resolvable hostnames used for name-based virtual host routing must also be configured to resolve to this IP address. If using TLS, then the host certificate Common Name (CN) and/or Subject Alternative Name (SAN) must be configured to match the respective FQDN.
For options to expose multiple services from potentially multiple brokers, review the Types of Ingress from the Kubernetes documentation.
The next examples provide Ingress manifests that can be applied using `kubectl apply -f <manifest-yaml>`. Then check that an external IP address (the ingress controller external IP) has been assigned to the rule/service, and that the host/external IP is ready for use, as it can take some time for the address to be populated.
kubectl get ingress
NAME              CLASS   HOSTS           ADDRESS         PORTS   AGE
example.address   nginx   frontend.host   20.120.69.200   80      43m
The following example configures ingress to access the PubSub+ REST service. Replace `<my-pubsubplus-service>` with the name of the service of your deployment (hint: the service name is similar to your pod names). The port name must match the `service.ports` name in the PubSub+ `values.yaml` file.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: http-plaintext-example
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: <my-pubsubplus-service>
            port:
              name: tcp-rest
External requests shall be targeted to the ingress External-IP at the HTTP port (80) and the specified path.
In addition to the above, this requires specifying a target virtual DNS-resolvable host (here `https-example.foo.com`), which resolves to the ingress External-IP, and a `tls` section. The `tls` section provides the possible hosts and the corresponding TLS secret that includes a private key and a certificate. The certificate must include the virtual host FQDN in its CN and/or SAN, as described above. Hint: TLS secrets can be easily created from existing files, as shown below.
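For example, the `testsecret-tls` secret referenced in the manifest below could be created from an existing key and certificate pair (file names are placeholders):

```bash
# Create the TLS secret used by the Ingress from existing key/certificate files
kubectl create secret tls testsecret-tls \
  --key="<my-server-key-file>" --cert="<my-certificate-file>"
```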
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: https-ingress-terminated-tls-example
spec:
  ingressClassName: nginx
  tls:
  - hosts:
      - https-example.foo.com
    secretName: testsecret-tls
  rules:
  - host: https-example.foo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: <my-pubsubplus-service>
            port:
              name: tcp-rest
External requests shall be targeted to the ingress External-IP through the defined hostname (here `https-example.foo.com`) at the TLS port (443) and the specified path.
This only differs from above in that the request is forwarded to a TLS-encrypted PubSub+ service port. The broker must have TLS configured but there are no specific requirements for the broker certificate as the ingress does not enforce it.
The differences in the Ingress manifest are an NGINX-specific annotation marking that the backend is using TLS, and the service target port in the last line, which now refers to a TLS backend port:
metadata:
  :
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
  :
spec:
  :
  rules:
  :
            port:
              name: tls-rest
In this case the ingress does not terminate TLS, only provides routing to the broker Pod based on the hostname provided in the SNI extension of the Client Hello at TLS connection setup. Since it will pass through TLS traffic directly to the broker as opaque data, this enables the use of ingress for any TCP-based protocol using TLS as transport.
The TLS passthrough capability must be explicitly enabled on the NGINX ingress controller, as it is off by default. This can be done by editing the `ingress-nginx-controller` Deployment in the `ingress-nginx` namespace.
- Open the controller for editing:
kubectl edit deployment ingress-nginx-controller --namespace ingress-nginx
- Search for where the `nginx-ingress-controller` arguments are provided, insert `--enable-ssl-passthrough` into the list, and save (see the sketch below). For more information, refer to the NGINX User Guide. Also note the potential performance impact of using SSL Passthrough mentioned here.
The Ingress manifest specifies "passthrough" by adding the `nginx.ingress.kubernetes.io/ssl-passthrough: "true"` annotation.
The deployed PubSub+ broker(s) must have TLS configured with a certificate that includes DNS names in its CN and/or SAN that match the host used. In the example, the broker server certificate may specify the host `*.broker1.bar.com`, so multiple services can be exposed from `broker1`, distinguished by the host FQDN.
The protocol client must support SNI. Whether the server certificate CN or SAN is used for host name validation depends on the client. Most recent clients use the SAN; for example, the PubSub+ Java API requires host DNS names in the SAN when using SNI.
With the above, an ingress example looks like the following:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-passthrough-tls-example
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: smf.broker1.bar.com
    http:
      paths:
      - backend:
          service:
            name: <my-pubsubplus-service>
            port:
              name: tls-smf
        path: /
        pathType: ImplementationSpecific
External requests shall be targeted to the ingress External-IP through the defined hostname (here `smf.broker1.bar.com`) at the TLS port (443), with no path required.
This section provides more information about what is required to achieve the correct label for the pod hosting the active event broker node.
Use `kubectl get pods --show-labels` to check the status of the "active" label. In a stable deployment, one of the message routing nodes with ordinal 0 or 1 shall have the label `active=true`. You can find out if there is an issue by checking events for any related ERROR reported.
This label is set by the `readiness_check.sh` script in `pubsubplus/templates/solaceConfigMap.yaml`, triggered by the StatefulSet's readiness probe. For this to happen, the following are required:
- the Solace pods must be able to communicate with each other at port 8080 and internal ports using the Service-Discovery service;
- the Kubernetes service account associated with the Solace pod must have sufficient rights to patch the pod's label when the active event broker is service ready (a check is sketched below);
- the Solace pods must be able to communicate with the Kubernetes API at `kubernetes.default.svc.cluster.local` at port $KUBERNETES_SERVICE_PORT. You can find out the address and port by opening a shell into the pod.
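A quick way to check the service account permission is sketched below; pod and namespace names are placeholders, and impersonating service accounts requires that your own user has impersonation rights:

```bash
# Determine the service account used by a broker pod
SA=$(kubectl get pod XXX-XXX-pubsubplus-0 -o jsonpath='{.spec.serviceAccountName}')
# Check whether that service account is allowed to patch pods (needed to set the "active" label)
kubectl auth can-i patch pods --as="system:serviceaccount:<namespace>:${SA}"
```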
The default deployment does not have TLS over TCP enabled for access to broker services. Although the exposed `service.ports` include ports for secured TCP, only the insecure ports can be used by default.
To enable accessing services over TLS a server key and certificate must be configured on the broker.
It is assumed that a provider outside the scope of this document will be used to create a server key and certificate for the event broker that meet the requirements described in the Solace Documentation. If the server key is password protected, it shall be transformed into an unencrypted key, e.g.: `openssl rsa -in encryptedprivate.key -out unencrypted.key`.
The server key and certificate must be packaged in a Kubernetes secret, for example by creating a TLS secret. Example:
kubectl create secret tls <my-tls-secret> --key="<my-server-key-file>" --cert="<my-certificate-file>"
This secret name and related parameters shall be specified when deploying the PubSub+ Helm chart:
tls:
  enabled: true # set to false by default
  serverCertificatesSecret: <my-tls-secret> # replace by the actual name
  certFilename: # optional, default if not provided: tls.crt
  certKeyFilename: # optional, default if not provided: tls.key
Note: ensure the filenames match the files reported from running `kubectl describe secret <my-tls-secret>`.
Here is an example of a new deployment with TLS enabled, using the default `certFilename` and `certKeyFilename`:
helm install my-release solacecharts/pubsubplus \
--set tls.enabled=true,tls.serverCertificatesSecret=<my-tls-secret>
Important: it is not possible to enable TLS on an existing deployment that was created without TLS by simply using the modify deployment procedure. In this case, for the first time, certificates need to be manually loaded and set up on each broker node. After that, it is possible to use `helm upgrade` with a secret specified.
It is also important to note that because the TLS/SSL configuration is not included in the global backup, this configuration cannot be restored.
In the event that the server key or certificate needs to be rotated, a new Kubernetes secret must be created; this may require deleting and recreating the old secret if using the same name.
Next, if using the same secret name, the broker pods need to be restarted one at a time, waiting for each to reach `1/1` availability before continuing with the next one: starting with the Monitor (ordinal 2), followed by the node in backup role with the `active=false` label, and finally the third node. If using a new secret name, the modify deployment procedure can be used and an automatic rolling update will follow these steps, restarting the nodes one at a time.
Note: a pod restart will result in the server certificate being provisioned from the secret again, so it will revert from any other server certificate that may have been provisioned on the broker through another mechanism.
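A hedged sketch of the manual restart sequence for an HA deployment with release name `my-release` follows; verify which pod currently carries the `active=false` label before deleting it:

```bash
# Restart one pod at a time, waiting for 1/1 Ready between steps
kubectl delete pod my-release-pubsubplus-2          # Monitor node first
kubectl wait --for=condition=Ready pod/my-release-pubsubplus-2 --timeout=600s
kubectl delete pod my-release-pubsubplus-1          # assumed backup role, label active=false
kubectl wait --for=condition=Ready pod/my-release-pubsubplus-1 --timeout=600s
kubectl delete pod my-release-pubsubplus-0          # remaining node last
kubectl wait --for=condition=Ready pod/my-release-pubsubplus-0 --timeout=600s
```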
The `image.repository` and `image.tag` parameters combined specify the PubSub+ Software Event Broker Docker image to be used for the deployment. They can point to an image in either a public or a private Docker container registry.
The default values are `solace/solace-pubsub-standard` and `latest`, which is the free PubSub+ Software Event Broker Standard Edition from the public Solace Docker Hub repo. It is generally recommended to set `image.tag` to a specific build for traceability purposes.
The following steps are applicable if using a private Docker container registry (e.g.: GCR, ECR or Harbor):
- Get the Solace PubSub+ event broker Docker image tar.gz archive
- Load the image into the private Docker registry
To get the PubSub+ Software Event Broker Docker image URL, go to the Solace Developer Portal and download the Solace PubSub+ Software Event Broker as a docker image or obtain your version from Solace Support.
| PubSub+ Software Event Broker Standard Docker Image | PubSub+ Software Event Broker Enterprise Evaluation Edition Docker Image |
|---|---|
| Free, up to 1k simultaneous connections, up to 10k messages per second | 90-day trial version, unlimited |
| Download Standard docker image | Download Evaluation docker image |
To load the Solace PubSub+ Software Event Broker Docker image into a private Docker registry, follow the general steps below; for specifics, consult the documentation of the registry you are using.
- Prerequisite: local installation of Docker is required
- Login to the private registry:
sudo docker login <private-registry> ...
- First, load the image to the local docker registry:
# Options a or b depending on your Docker image source:
## Option a): If you have a local tar.gz Docker image file
sudo docker load -i <solace-pubsub-XYZ-docker>.tar.gz
## Option b): You can use the public Solace Docker image, such as from Docker Hub
sudo docker pull solace/solace-pubsub-standard:latest # or specific <TagName>
#
# Verify the image has been loaded and note the associated "IMAGE ID"
sudo docker images
- Tag the image with a name specific to the private registry and tag:
sudo docker tag <image-id> <private-registry>/<path>/<image-name>:<tag>
- Push the image to the private registry
sudo docker push <private-registry>/<path>/<image-name>:<tag>
Note that additional steps may be required if using signed images.
An additional ImagePullSecret may be required if using signed images from a private Docker registry, e.g.: Harbor.
Here is an example of creating an ImagePullSecret. Refer to your registry's documentation for the specific details of use.
kubectl create secret docker-registry <pull-secret-name> --docker-server=<private-registry-server> \
  --docker-username=<registry-user-name> --docker-password=<registry-user-password> \
  --docker-email=<registry-user-email>
Then set the `image.pullSecretName` chart value to `<pull-secret-name>`.
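A deployment referencing the private registry image and the pull secret could then look like this sketch:

```bash
helm install my-release solacecharts/pubsubplus \
  --set image.repository=<private-registry>/<path>/<image-name>,image.tag=<tag> \
  --set image.pullSecretName=<pull-secret-name>
```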
The event broker container already runs in non-privileged mode.
If `securityContext.enabled` is `true` (the default), then the `securityContext.fsGroup` and `securityContext.runAsUser` settings define the pod security context.
If other settings control `fsGroup` and `runAsUser`, e.g., when using a PodSecurityPolicy or an OpenShift "restricted" SCC, set `securityContext.enabled` to `false`, or ensure the specified values do not conflict with the policy settings.
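For example, when an external policy (such as an OpenShift "restricted" SCC) assigns the IDs, a values fragment like the following could be used; otherwise explicit, non-conflicting IDs can be provided (placeholder values shown):

```yaml
# Option a) let the platform policy control fsGroup and runAsUser
securityContext:
  enabled: false
# Option b) set explicit values that do not conflict with the policy
#securityContext:
#  enabled: true
#  fsGroup: <group-id>
#  runAsUser: <user-id>
```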
Services require the pod label "active" on the serving event broker.
- In a controlled environment it may be necessary to add a NetworkPolicy to enable required communication.
Using secrets for TLS server keys and certificates follows Kubernetes recommendations. However, particularly in a production environment, additional steps are required to ensure only authorized access to these secrets, following Kubernetes industry best practices, including setting tight RBAC permissions and fixing possible security holes.
The deployment comes with an existing user `admin`. Depending on how the installation is carried out, it starts with either a random password or an existing one; refer here. The default `admin` user has the `admin` CLI User Access Level, which means the `admin` user can execute all CLI commands on the event broker, including controlling broker-wide authentication and authorization, and can also create other admin users.
However, if there is a need to set up a new CLI user, first directly access the event broker pod:
kubectl exec -it XXX-XXX-pubsubplus-<pod-ordinal> -- bash
Once you have access to the Solace CLI, enter the following commands to create a new user:
solace> enable
solace# configure
solace(configure)# create username <new-user-name>
Then enter the following commands to set the CLI user's access level and password. For a full list of all the available access levels, refer to this.
solace(configure/username)# global-access-level <access-level>
solace(configure/username)# change-password <password>
The new user is now available for use via the CLI.
At the moment, changing the default `admin` user password is not supported. If there is a need to change the password of a user other than `admin`, directly access the event broker pod:
kubectl exec -it XXX-XXX-pubsubplus-<pod-ordinal> -- bash
Once you have access to the Solace CLI, enter the following commands:
solace> enable
solace# configure
solace(configure)# username <user-name>
solace(configure/username)# change-password <password>
Refer to these instructions to install `kubectl` if your environment does not already provide this tool or an equivalent (like `oc` in OpenShift).
This refers to getting your platform ready, either by creating a new one or by getting access to an existing one. Supported platforms include, but are not restricted to:
- Amazon EKS
- Azure AKS
- Google GCP
- OpenShift
- MiniKube
- VMWare PKS
Check your platform by running the `kubectl get nodes` command from your command-line client.
The event broker can be deployed using Helm v3.
Note: For Helm v2 support refer to earlier versions of this quickstart.
The Helm v3 executable is available from https://github.com/helm/helm/releases. Further documentation is available from https://helm.sh/.
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
As discussed in the Overview, two types of deployments will be described:
- Deployment steps using Helm, as package manager
- Alternative deployment by generating templates for the Kubernetes `kubectl` tool
The recommended way is to make use of the published pre-packaged PubSub+ charts from Solace's public repo, customizing your deployment through the available chart parameters.
Add or refresh a local Solace `solacecharts` repo:
# Add new "solacecharts" repo
helm repo add solacecharts https://solaceproducts.github.io/pubsubplus-kubernetes-helm-quickstart/helm-charts
# Refresh if needed, e.g.: to use a recently published chart version
helm repo update solacecharts
# Install from the repo
helm install my-release solacecharts/pubsubplus
There are three Helm chart variants available with default small-size configurations:
- `pubsubplus-dev` - PubSub+ Software Event Broker for Developers (standalone)
- `pubsubplus` - PubSub+ Software Event Broker standalone, supporting 100 connections
- `pubsubplus-ha` - PubSub+ Software Event Broker HA, supporting 100 connections
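For example, the HA variant is installed the same way by referencing its chart name:

```bash
helm install my-release solacecharts/pubsubplus-ha
```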
Customization options are described in the PubSub+ Software Event Broker Helm Chart reference.
Also, refer to the quick start guide for additional deployment details.
More customization options
If more customization than just using Helm parameters is required, you can create your own fork so templates can be edited:
# This creates a local directory from the published templates
helm fetch solacecharts/pubsubplus --untar
# Use the Helm chart from this directory
helm install my-release ./pubsubplus
Note: it is encouraged to raise a GitHub issue to possibly contribute your enhancements back to the project.
This method first generates installable Kubernetes templates from this project's Helm charts; the templates can then be installed using the `kubectl` tool.
Note that the later sections of this document about modifying, upgrading, or deleting a Deployment using the Helm tool do not apply in this case.
Step 1: Generate Kubernetes templates for Solace event broker deployment
- Ensure Helm is locally installed.
- Add or refresh a local Solace `solacecharts` repo:
# Add new "solacecharts" repo
helm repo add solacecharts https://solaceproducts.github.io/pubsubplus-kubernetes-helm-quickstart/helm-charts
# Refresh if needed, e.g.: to use a recently published chart version
helm repo update solacecharts
- Generate the templates. First, consider whether any configuration overrides are required. If so, you can add them as additional `--set ...` parameters to the `helm template` command, or use an override YAML file.
# Create local copy
helm fetch solacecharts/pubsubplus --untar
# Create location for the generated templates
mkdir generated-templates
# In one of the next sample commands, replace my-release with the desired release name
# a) Using all defaults:
helm template my-release --output-dir ./generated-templates ./pubsubplus
# b) Example with configuration using --set
helm template my-release --output-dir ./generated-templates \
--set solace.redundancy=true \
./pubsubplus
# c) Example with configuration using an override YAML file
helm template my-release --output-dir ./generated-templates \
-f my-values.yaml \
./pubsubplus
The generated set of templates is now available in the `generated-templates` directory.
Step 2: Deploy the templates on the target system
Assumptions: `kubectl` is deployed and configured to point to your Kubernetes cluster.
- Optionally, copy the `generated-templates` directory with its contents if deploying from a different host.
- Initiate the deployment:
kubectl apply --recursive -f ./generated-templates/pubsubplus
Wait for the deployment to complete; it is then ready to use.
- To delete the deployment, execute:
kubectl delete --recursive -f ./generated-templates/pubsubplus
Now you can validate your deployment on the command line. In this example an HA configuration is deployed with pod/XXX-XXX-pubsubplus-0 being the active event broker/pod. The notation XXX-XXX is used for the unique release name, e.g: "my-release".
prompt:~$ kubectl get statefulsets,services,pods,pvc,pv
NAME READY AGE
statefulset.apps/my-release-pubsubplus 3/3 13m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.92.0.1 <none> 443/TCP 14d
service/my-release-pubsubplus LoadBalancer 10.92.13.40 34.67.66.30 2222:30197/TCP,8080:30343/TCP,1943:32551/TCP,55555:30826/TCP,55003:30770/TCP,55443:32583/TCP,8008:32689/TCP,1443:32460/TCP,5672:31960/TCP,1883:32112/TCP,9000:30848/TCP 13m
service/my-release-pubsubplus-discovery ClusterIP None <none> 8080/TCP,8741/TCP,8300/TCP,8301/TCP,8302/TCP 13m
NAME READY STATUS RESTARTS AGE
pod/my-release-pubsubplus-0 1/1 Running 0 13m
pod/my-release-pubsubplus-1 1/1 Running 0 13m
pod/my-release-pubsubplus-2 1/1 Running 0 13m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/data-my-release-pubsubplus-0 Bound pvc-6b0cd358-30c4-11ea-9379-42010a8000c7 30Gi RWO standard 13m
persistentvolumeclaim/data-my-release-pubsubplus-1 Bound pvc-6b14bc8a-30c4-11ea-9379-42010a8000c7 30Gi RWO standard 13m
persistentvolumeclaim/data-my-release-pubsubplus-2 Bound pvc-6b24b2aa-30c4-11ea-9379-42010a8000c7 30Gi RWO standard 13m
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-6b0cd358-30c4-11ea-9379-42010a8000c7 30Gi RWO Delete Bound default/data-my-release-pubsubplus-0 standard 13m
persistentvolume/pvc-6b14bc8a-30c4-11ea-9379-42010a8000c7 30Gi RWO Delete Bound default/data-my-release-pubsubplus-1 standard 13m
persistentvolume/pvc-6b24b2aa-30c4-11ea-9379-42010a8000c7 30Gi RWO Delete Bound default/data-my-release-pubsubplus-2 standard 13m
prompt:~$ kubectl describe service my-release-pubsubplus
Name: my-release-pubsubplus
Namespace: test
Labels: app.kubernetes.io/instance=my-release
app.kubernetes.io/managed-by=Tiller
app.kubernetes.io/name=pubsubplus
helm.sh/chart=pubsubplus-1.0.0
Annotations: <none>
Selector: active=true,app.kubernetes.io/instance=my-release,app.kubernetes.io/name=pubsubplus
Type: LoadBalancer
IP: 10.100.200.41
LoadBalancer Ingress: 34.67.66.30
Port: ssh 2222/TCP
TargetPort: 2222/TCP
NodePort: ssh 30197/TCP
Endpoints: 10.28.1.20:2222
:
:
Generally, all services, including management and messaging, are accessible through a Load Balancer. In the above example, `34.67.66.30` is the Load Balancer's external Public IP to use.
Note: When using MiniKube, there is no integrated Load Balancer. For a workaround, execute `minikube service XXX-XXX-pubsubplus` to expose the services. Services will be accessible using mapped ports instead of the direct ports; the mapping can be obtained from `kubectl describe service XXX-XXX-pubsubplus`.
There are multiple management tools available. The WebUI is the recommended simplest way to administer the event broker for common tasks.
A random admin password will be generated if it has not been provided at deployment using the `solace.usernameAdminPassword` parameter; refer to the information from `helm status` for how to retrieve it.
Important: Every time `helm install` or `helm upgrade` is called, a new admin password will be generated, which may break an existing deployment. Therefore, ensure you always provide the password from the initial deployment as the `solace.usernameAdminPassword=<PASSWORD>` parameter to subsequent `install` and `upgrade` commands.
Use the Load Balancer's external Public IP at port 8080 to access these services.
If you are using a single event broker and are used to working with CLI event broker console access, you can SSH into the event broker as the `admin` user using the Load Balancer's external Public IP:
$ ssh -p 2222 admin@34.67.66.30
Solace PubSub+ Standard
Password:
Solace PubSub+ Standard Version 9.4.0.105
The Solace PubSub+ Standard is proprietary software of
Solace Corporation. By accessing the Solace PubSub+ Standard
you are agreeing to the license terms and conditions located at
//www.solace.com/license-software
Copyright 2004-2019 Solace Corporation. All rights reserved.
To purchase product support, please contact Solace at:
//dev.solace.com/contact-us/
Operating Mode: Message Routing Node
XXX-XXX-pubsubplus-0>
If you are using an HA deployment, it is better to access the CLI through the Kubernetes pod rather than directly via SSH.
- Loopback to SSH directly on the pod
kubectl exec -it XXX-XXX-pubsubplus-0 -- bash -c "ssh -p 2222 admin@localhost"
- Loopback to SSH on your host with a port-forward map
kubectl port-forward XXX-XXX-pubsubplus-0 62222:2222 &
ssh -p 62222 admin@localhost
This can also be mapped to individual event brokers in the deployment via port-forward:
kubectl port-forward XXX-XXX-pubsubplus-0 8081:8080 &
kubectl port-forward XXX-XXX-pubsubplus-1 8082:8080 &
kubectl port-forward XXX-XXX-pubsubplus-2 8083:8080 &
For direct access, use:
kubectl exec -it XXX-XXX-pubsubplus-<pod-ordinal> -- bash
To test data traffic through the newly created event broker instance, visit the Solace Developer Portal APIs & Protocols. Under each option there is a Publish/Subscribe tutorial that will help you get started and provide the specific default port to use.
Use the external Public IP to access the deployment. If a port required for a protocol is not opened, refer to the Modification example for how to open it up.
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/
Run `kubectl get statefulsets,services,pods,pvc,pv` to get an understanding of the state, then drill down to get more information on a failed resource to reveal possible Kubernetes resourcing issues, e.g.:
kubectl describe pvc <pvc-name>
Detailed logs from the currently running container in a pod:
kubectl logs XXX-XXX-pubsubplus-0 -f # use -f to follow live
It is also possible to get the logs from a previously terminated or failed container:
kubectl logs XXX-XXX-pubsubplus-0 -p
Filtering on bringup logs (helps with initial troubleshooting):
kubectl logs XXX-XXX-pubsubplus-0 | grep [.]sh
Kubernetes collects all events for a cluster in one pool. This includes events related to the PubSub+ deployment.
It is recommended to watch events when creating or upgrading a Solace deployment. Events clear after about an hour. You can query all available events:
kubectl get events -w # use -w to watch live
If pods stay in pending state and `kubectl describe pods` reveals there are not enough memory or CPU resources, check the resource requirements of the targeted scaling tier of your deployment and ensure adequate node resources are available.
Pods may also stay in pending state because storage requirements cannot be met. Check `kubectl get pv,pvc`. PVCs and PVs should be in bound state; if not, use `kubectl describe pvc` to investigate any issues.
Unless otherwise specified, a default storage class must be available for default PubSub+ deployment configuration.
kubectl get storageclasses
Pods stuck in CrashLoopBackoff or Failed state, or Running but never becoming Ready and "active", usually indicate an issue with available Kubernetes node resources, the container OS, or the event broker process start.
- Try to understand the reason following earlier hints in this section.
- Try to recreate the issue by deleting and then reinstalling the deployment - ensure to remove related PVCs if applicable as they would mount volumes with existing, possibly outdated or incompatible database - and watch the logs and events from the beginning. Look for ERROR messages preceded by information that may reveal the issue.
If no pods are listed related to your deployment check the StatefulSet for any clues:
kubectl describe statefulset my-release-pubsubplus
Your Kubernetes environment's security constraints may also impact successful deployment. Review the Security considerations section.
Use the `helm upgrade` command to upgrade/modify the event broker deployment: request the required modifications to the chart by passing the new/changed parameters or by creating an upgrade `<values-file>` YAML file. When chaining multiple `-f <values-file>` arguments to Helm, override priority is given to the last (right-most) file specified.
For both version upgrade and modifications, the "RollingUpdate" strategy of the Kubernetes StatefulSet applies: pods in the StatefulSet are restarted with new values in reverse order of ordinals, which means for PubSubPlus first the monitoring node (ordinal 2), then backup (ordinal 1) and finally the primary node (ordinal 0).
For the next examples, assume a deployment has been created with some initial overrides for a development HA cluster:
helm install my-release solacecharts/pubsubplus --set solace.size=dev,solace.redundancy=true
The currently used parameter values are the default chart parameter values overlaid with the value-overrides.
To get the default chart parameter values, check `helm show values solacecharts/pubsubplus`.
To get the current value-overrides, execute:
$ helm get values my-release
USER-SUPPLIED VALUES:
solace:
  redundancy: true
  size: dev
Important: this may not be shown, but be aware of an additional non-default parameter:
solace:
  usernameAdminPassword: jMzKoW39zz # The value is just an example
This has been generated at the initial deployment if not specified, and must be provided henceforth with all change requests to keep it the same. See the related note in the Admin Password section.
To upgrade the version of the event broker running within a Kubernetes cluster:
- Add the new version of the event broker to your container registry, then
- Either:
- Set the new image in the Helm upgrade command, also ensure to include the original overrides:
helm upgrade my-release solacecharts/pubsubplus \
--set solace.size=dev,solace.redundancy=true,solace.usernameAdminPassword=jMzKoW39zz \
--set image.repository=<repo>/<project>/solace-pubsub-standard,image.tag=NEW.VERSION.XXXXX,image.pullPolicy=IfNotPresent
- Or create a simple `version-upgrade.yaml` file and use that to upgrade the release:
tee ./version-upgrade.yaml <<-EOF   # include original and new overrides
solace:
  redundancy: true
  size: dev
  usernameAdminPassword: jMzKoW39zz
image:
  repository: <repo>/<project>/solace-pubsub-standard
  tag: NEW.VERSION.XXXXX
  pullPolicy: IfNotPresent
EOF
helm upgrade my-release solacecharts/pubsubplus -f version-upgrade.yaml
Note: the upgrade will begin immediately, with pods 2, 1, and 0 (Monitor, Backup, Primary) taken down for upgrade in that order in an HA deployment. This will affect running event broker instances, may result in multiple failovers, and requires connection retries to be configured in the client.
Similarly, to modify deployment parameters, you need to pass the modified value-overrides. Passing the same value-overrides to upgrade will result in no change.
In this example we will add the AMQP encrypted (TLS) port to the load balancer - it is not included by default.
First look up the port number for AMQP TLS: the required port is 5671.
Next, create an update file with the additional contents:
tee ./port-update.yaml <<-EOF   # :
service:
  ports:
    - servicePort: 5671
      containerPort: 5671
      protocol: TCP
      name: amqptls
EOF
Now upgrade the deployment, passing the changes. This time the original `--set` value-overrides are combined with the override file:
helm upgrade my-release solacecharts/pubsubplus \
--set solace.size=dev,solace.redundancy=true,solace.usernameAdminPassword=jMzKoW39zz \
--values port-update.yaml
If using persistent storage, broker data will not be deleted when running `helm delete`.
In this case the deployment can be reinstalled, and continue from the point before the `helm delete` command was executed, by running `helm install` again with the same release name and parameters as the previous run. This includes explicitly providing the same admin password as before.
# Initial deployment:
helm install my-release solacecharts/pubsubplus --set solace.size=dev,solace.redundancy=true
# This will auto-generate an admin password
# Retrieve the admin password, follow instructions from the output of "helm status", section Admin credentials
# Delete this deployment
helm delete my-release
# Reinstall deployment, assuming persistent storage. Notice the admin password specified
helm install my-release solacecharts/pubsubplus --set solace.size=dev,solace.redundancy=true,solace.usernameAdminPassword=jMzKoW39zz
# Original deployment is now back up
Use Helm to delete a deployment, also called a release:
helm delete my-release
Check what has remained from the deployment:
kubectl get statefulsets,services,pods,pvc,pv
Note: Helm will not clean up PVCs and related PVs. Use `kubectl delete` to delete PVCs if the associated data is no longer required.
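For example, for the HA deployment shown in Validating the Deployment, the leftover PVCs could be removed as sketched below; only do this if the broker data is truly no longer needed:

```bash
# List leftover PVCs of the release, then delete those no longer required
kubectl get pvc | grep my-release-pubsubplus
kubectl delete pvc data-my-release-pubsubplus-0 data-my-release-pubsubplus-1 data-my-release-pubsubplus-2
```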
The preferred way of backing up and restoring your deployment is by backing up and restoring the Message VPNs. This is because of certain limitations of the system-wide backup and restore; for example, TLS/SSL configuration is not included in a system-wide backup, so configuration related to it would be lost.
A detailed guide to performing backup and restore of Message VPNs can be found here.