OSSM-8200: adding doc for multiple control planes + scoping a mesh (o…
FilipB authored Oct 22, 2024
1 parent 21bd142 commit 13a6055
Showing 3 changed files with 306 additions and 0 deletions.
3 changes: 3 additions & 0 deletions docs/ossm/README.md
@@ -8,3 +8,6 @@ This documentation is specific to the OpenShift Service Mesh product and may dif

- [Running Red Hat OpenShift Service Mesh (OSSM) 2 and OSSM 3 side by side](./ossm-2-and-ossm-3-side-by-side/README.md)
- [Cert Manager and istio-csr Integration](./cert-manager/README.md)
- [Adding services to a service mesh](./create-mesh/README.md)
- [Installing the Sidecar](./injection/README.md)
- [Multiple Istio Control Planes in a Single Cluster](./multi-control-planes/README.md)
122 changes: 122 additions & 0 deletions docs/ossm/create-mesh/README.md
@@ -0,0 +1,122 @@
# Scoping the service mesh with DiscoverySelectors
This page describes how the control plane discovers cluster resources and how you can manage its scope.

A service mesh will include a workload that:
1. Has been discovered by the control plane
1. Has been [injected with an Envoy proxy sidecar](../injection/README.md)


By default, the control plane will watch all namespaces within the cluster, meaning that:
- Each proxy instance will receive configuration for all namespaces, including information about workloads that are not enrolled in the mesh.
- Any workload with the appropriate pod or namespace injection label will be injected with a proxy sidecar.

This may not be desirable in a shared cluster, and you may want to limit the scope of the service mesh to only a portion of your cluster. This is particularly important if you plan to have [multiple service meshes within the same cluster](../multi-control-planes/README.md).

### DiscoverySelectors
Discovery selectors provide a mechanism for the mesh administrator to limit the scope of a service mesh. This is done through a Kubernetes [label selector](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors), which defines criteria for which namespaces will be visible to the control plane. Any namespaces not matching are ignored by the control plane entirely.

> **_NOTE:_** Istiod always opens a watch to OpenShift for all namespaces. However, discovery selectors ignore objects that are not selected very early in processing, minimizing the cost.

> **_NOTE:_** `discoverySelectors` is not a security boundary. Istiod continues to have access to all namespaces even when you have configured your `discoverySelectors`.

#### Using DiscoverySelectors
The `discoverySelectors` field accepts an array of Kubernetes [selectors](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#resources-that-support-set-based-requirements). The exact type is `[]LabelSelector`, as defined [here](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#resources-that-support-set-based-requirements), allowing both simple selectors and set-based selectors. These selectors apply to labels on namespaces.

You can configure each label selector for a variety of use cases, including but not limited to:

- Arbitrary label names/values, for example, all namespaces with label `istio-discovery=enabled`
- A list of namespace labels using set-based selectors, which carry OR semantics, for example, all namespaces with label `istio-discovery=enabled` OR `region=us-east1`
- Inclusion and/or exclusion of namespaces, for example, all namespaces with label `istio-discovery=enabled` AND label key `app` equal to `helloworld`
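Expressed as a `meshConfig` fragment, the use cases above might look like the following sketch (the label names and values are illustrative, not required by Istio):

```yaml
meshConfig:
  discoverySelectors:
    # Entries in the discoverySelectors list are ORed together:
    # a namespace matching ANY entry is discovered.
    - matchLabels:
        istio-discovery: enabled          # istio-discovery=enabled
    - matchExpressions:                   # set-based form: region in (us-east1)
        - key: region
          operator: In
          values:
            - us-east1
    # Labels within a single matchLabels are ANDed together:
    # istio-discovery=enabled AND app=helloworld
    - matchLabels:
        istio-discovery: enabled
        app: helloworld
```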

#### Using Discovery Selectors to Scope a Service Mesh
Assuming you know which namespaces to include as part of the service mesh, as a mesh administrator, you can configure `discoverySelectors` at installation time or post-installation by adding your desired discovery selectors to Istio’s MeshConfig resource. For example, you can configure Istio to discover only the namespaces that have the label `istio-discovery=enabled`.

##### Prerequisites
- The OpenShift Service Mesh operator has been installed
- An Istio CNI resource has been created
- The `istioctl` binary has been installed on your localhost

1. Create the system namespace `istio-system`:
```bash
oc create ns istio-system
```
1. Label the `istio-system` namespace:
```bash
oc label ns istio-system istio-discovery=enabled
```
1. Prepare `istio.yaml` with `discoverySelectors` configured:
```yaml
kind: Istio
apiVersion: sailoperator.io/v1alpha1
metadata:
  name: default
spec:
  namespace: istio-system
  values:
    meshConfig:
      discoverySelectors:
        - matchLabels:
            istio-discovery: enabled
  updateStrategy:
    type: InPlace
  version: v1.23.0
```
1. Apply the Istio CR:
```bash
oc apply -f istio.yaml
```
1. Create the first application namespace:
```bash
oc create ns app-ns-1
```
1. Create the second application namespace:
```bash
oc create ns app-ns-2
```
1. Label the first application namespace so that it is matched by the defined `discoverySelectors`, and enable sidecar injection:
```bash
oc label ns app-ns-1 istio-discovery=enabled istio-injection=enabled
```
1. Deploy the sleep application to the first namespace:
```bash
oc apply -f https://raw.githubusercontent.com/istio/istio/release-1.23/samples/sleep/sleep.yaml -n app-ns-1
```
1. Deploy the sleep application to the second namespace:
```bash
oc apply -f https://raw.githubusercontent.com/istio/istio/release-1.23/samples/sleep/sleep.yaml -n app-ns-2
```
1. Verify that you don't see any endpoints from the second namespace:
```bash
istioctl pc endpoint deploy/sleep -n app-ns-1
ENDPOINT STATUS OUTLIER CHECK CLUSTER
10.128.2.197:15010 HEALTHY OK outbound|15010||istiod.istio-system.svc.cluster.local
10.128.2.197:15012 HEALTHY OK outbound|15012||istiod.istio-system.svc.cluster.local
10.128.2.197:15014 HEALTHY OK outbound|15014||istiod.istio-system.svc.cluster.local
10.128.2.197:15017 HEALTHY OK outbound|443||istiod.istio-system.svc.cluster.local
10.131.0.32:80 HEALTHY OK outbound|80||sleep.app-ns-1.svc.cluster.local
127.0.0.1:15000 HEALTHY OK prometheus_stats
127.0.0.1:15020 HEALTHY OK agent
unix://./etc/istio/proxy/XDS HEALTHY OK xds-grpc
unix://./var/run/secrets/workload-spiffe-uds/socket HEALTHY OK sds-grpc
```
1. Label the second application namespace so that it is matched by the defined `discoverySelectors`:
```bash
oc label ns app-ns-2 istio-discovery=enabled
```
1. Verify that, after labeling the second namespace, its endpoints also appear in the list of discovered endpoints:
```bash
istioctl pc endpoint deploy/sleep -n app-ns-1
ENDPOINT STATUS OUTLIER CHECK CLUSTER
10.128.2.197:15010 HEALTHY OK outbound|15010||istiod.istio-system.svc.cluster.local
10.128.2.197:15012 HEALTHY OK outbound|15012||istiod.istio-system.svc.cluster.local
10.128.2.197:15014 HEALTHY OK outbound|15014||istiod.istio-system.svc.cluster.local
10.128.2.197:15017 HEALTHY OK outbound|443||istiod.istio-system.svc.cluster.local
10.131.0.32:80 HEALTHY OK outbound|80||sleep.app-ns-1.svc.cluster.local
10.131.0.33:80 HEALTHY OK outbound|80||sleep.app-ns-2.svc.cluster.local
127.0.0.1:15000 HEALTHY OK prometheus_stats
127.0.0.1:15020 HEALTHY OK agent
unix://./etc/istio/proxy/XDS HEALTHY OK xds-grpc
unix://./var/run/secrets/workload-spiffe-uds/socket HEALTHY OK sds-grpc
```
See [Multiple Istio Control Planes in a Single Cluster](../multi-control-planes/README.md) for another example of `discoverySelectors` usage.
181 changes: 181 additions & 0 deletions docs/ossm/multi-control-planes/README.md
@@ -0,0 +1,181 @@
# Multiple Istio Control Planes in a Single Cluster
By default, a control plane watches all namespaces within the cluster, so two control planes would conflict with each other, resulting in undefined behavior.

To resolve this, Istio provides [discoverySelectors](../create-mesh/README.md#discoveryselectors), which, together with control plane revisions, enable you to install multiple control planes in a single cluster.
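As a sketch of how the two mechanisms fit together, a namespace opts into one of the meshes with two labels: one matched by that control plane's `discoverySelectors`, and one selecting the injection revision. The values below follow the example used later on this page:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: app-ns-1
  labels:
    usergroup: usergroup-1     # matched by the usergroup-1 control plane's discoverySelectors
    istio.io/rev: usergroup-1  # sidecars here are injected by the usergroup-1 revision
```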

## Prerequisites
- The OpenShift Service Mesh operator has been installed
- An Istio CNI resource has been created
- The `istioctl` binary has been installed on your localhost

## Deploying multiple control planes
The cluster will host two control planes installed in two different system namespaces. The mesh application workloads will run in multiple application-specific namespaces, each namespace associated with one or the other control plane based on revision and discovery selector configurations.

1. Create the first system namespace `usergroup-1`:
```bash
oc create ns usergroup-1
```
1. Label the first system namespace:
```bash
oc label ns usergroup-1 usergroup=usergroup-1
```
1. Prepare `istio-1.yaml`:
```yaml
kind: Istio
apiVersion: sailoperator.io/v1alpha1
metadata:
  name: usergroup-1
spec:
  namespace: usergroup-1
  values:
    meshConfig:
      discoverySelectors:
        - matchLabels:
            usergroup: usergroup-1
  updateStrategy:
    type: InPlace
  version: v1.23.0
```
1. Create the `Istio` resource:
```bash
oc apply -f istio-1.yaml
```
1. Create the second system namespace `usergroup-2`:
```bash
oc create ns usergroup-2
```
1. Label the second system namespace:
```bash
oc label ns usergroup-2 usergroup=usergroup-2
```
1. Prepare `istio-2.yaml`:
```yaml
kind: Istio
apiVersion: sailoperator.io/v1alpha1
metadata:
  name: usergroup-2
spec:
  namespace: usergroup-2
  values:
    meshConfig:
      discoverySelectors:
        - matchLabels:
            usergroup: usergroup-2
  updateStrategy:
    type: InPlace
  version: v1.23.0
```
1. Create the `Istio` resource:
```bash
oc apply -f istio-2.yaml
```
1. Deploy a policy for workloads in the `usergroup-1` namespace to accept only mutual TLS traffic, `peer-auth-1.yaml`:
```yaml
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: "usergroup-1-peerauth"
  namespace: "usergroup-1"
spec:
  mtls:
    mode: STRICT
```
```bash
oc apply -f peer-auth-1.yaml
```
1. Deploy a policy for workloads in the `usergroup-2` namespace to accept only mutual TLS traffic, `peer-auth-2.yaml`:
```yaml
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: "usergroup-2-peerauth"
  namespace: "usergroup-2"
spec:
  mtls:
    mode: STRICT
```
```bash
oc apply -f peer-auth-2.yaml
```
1. Verify the control planes are deployed and running:
```bash
oc get pods -n usergroup-1
NAME READY STATUS RESTARTS AGE
istiod-usergroup-1-747fddfb56-xzpkj 1/1 Running 0 5m1s
oc get pods -n usergroup-2
NAME READY STATUS RESTARTS AGE
istiod-usergroup-2-5b9cbb7669-lwhgv 1/1 Running 0 3m41s
```

## Deploy application workloads per usergroup
1. Create three application namespaces:
```bash
oc create ns app-ns-1
oc create ns app-ns-2
oc create ns app-ns-3
```
1. Label each namespace to associate them with their respective control planes:
```bash
oc label ns app-ns-1 usergroup=usergroup-1 istio.io/rev=usergroup-1
oc label ns app-ns-2 usergroup=usergroup-2 istio.io/rev=usergroup-2
oc label ns app-ns-3 usergroup=usergroup-2 istio.io/rev=usergroup-2
```
1. Deploy one `sleep` and `httpbin` application per namespace:
```bash
oc apply -f https://raw.githubusercontent.com/istio/istio/release-1.23/samples/sleep/sleep.yaml -n app-ns-1
oc apply -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml -n app-ns-1
oc apply -f https://raw.githubusercontent.com/istio/istio/release-1.23/samples/sleep/sleep.yaml -n app-ns-2
oc apply -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml -n app-ns-2
oc apply -f https://raw.githubusercontent.com/istio/istio/release-1.23/samples/sleep/sleep.yaml -n app-ns-3
oc apply -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml -n app-ns-3
```
1. Wait a few seconds for the `httpbin` and `sleep` pods to be running with sidecars injected:
```bash
oc get pods -n app-ns-1
NAME READY STATUS RESTARTS AGE
httpbin-7f56dc944b-kpw2x 2/2 Running 0 2m26s
sleep-5577c64d7c-b5wd2 2/2 Running 0 91m
```
Repeat this step for the other application namespaces (`app-ns-2`, `app-ns-3`).
> [!TIP]
> `oc wait --for=condition=Available deployment/sleep -n app-ns-1` can be used to wait for a deployment to become ready

## Verify the application to control plane mapping
Now that the applications are deployed, you can use the `istioctl ps` (proxy-status) command to confirm that each application workload is managed by its respective control plane: `app-ns-1` is managed by `usergroup-1`, while `app-ns-2` and `app-ns-3` are managed by `usergroup-2`:
```bash
istioctl ps -i usergroup-1
NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION
httpbin-7f56dc944b-kpw2x.app-ns-1 Kubernetes SYNCED (2m23s) SYNCED (2m23s) SYNCED (2m23s) SYNCED (2m23s) IGNORED istiod-usergroup-1-747fddfb56-xzpkj 1.23.0
sleep-5577c64d7c-b5wd2.app-ns-1 Kubernetes SYNCED (66s) SYNCED (66s) SYNCED (66s) SYNCED (66s) IGNORED istiod-usergroup-1-747fddfb56-xzpkj 1.23.0
```
```bash
istioctl ps -i usergroup-2
NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION
httpbin-7f56dc944b-g4s57.app-ns-3 Kubernetes SYNCED (2m) SYNCED (2m) SYNCED (2m) SYNCED (2m) IGNORED istiod-usergroup-2-5b9cbb7669-lwhgv 1.23.0
httpbin-7f56dc944b-rzwr5.app-ns-2 Kubernetes SYNCED (2m2s) SYNCED (2m2s) SYNCED (2m) SYNCED (2m2s) IGNORED istiod-usergroup-2-5b9cbb7669-lwhgv 1.23.0
sleep-5577c64d7c-wjnxc.app-ns-3 Kubernetes SYNCED (2m2s) SYNCED (2m2s) SYNCED (2m) SYNCED (2m2s) IGNORED istiod-usergroup-2-5b9cbb7669-lwhgv 1.23.0
sleep-5577c64d7c-xk27f.app-ns-2 Kubernetes SYNCED (2m2s) SYNCED (2m2s) SYNCED (2m) SYNCED (2m2s) IGNORED istiod-usergroup-2-5b9cbb7669-lwhgv 1.23.0
```
## Verify that application connectivity stays within the respective usergroup
1. Send a request from the `sleep` pod in `app-ns-1` in `usergroup-1` to the `httpbin` service in `app-ns-2` in `usergroup-2`. The communication should fail:
```bash
oc -n app-ns-1 exec "$(oc -n app-ns-1 get pod -l app=sleep -o jsonpath={.items..metadata.name})" -c sleep -- curl -sIL http://httpbin.app-ns-2.svc.cluster.local:8000
HTTP/1.1 503 Service Unavailable
content-length: 95
content-type: text/plain
date: Wed, 16 Oct 2024 12:05:37 GMT
server: envoy
```
1. Send a request from the `sleep` pod in `app-ns-2` in `usergroup-2` to the `httpbin` service in `app-ns-3` in `usergroup-2`. The communication should work:
```bash
oc -n app-ns-2 exec "$(oc -n app-ns-2 get pod -l app=sleep -o jsonpath={.items..metadata.name})" -c sleep -- curl -sIL http://httpbin.app-ns-3.svc.cluster.local:8000
HTTP/1.1 200 OK
access-control-allow-credentials: true
access-control-allow-origin: *
content-security-policy: default-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' camo.githubusercontent.com
content-type: text/html; charset=utf-8
date: Wed, 16 Oct 2024 12:06:30 GMT
x-envoy-upstream-service-time: 8
server: envoy
transfer-encoding: chunked
```
