core: remove support for ceph quincy
Given that Ceph Quincy (v17) is past end of life,
remove Quincy from the supported Ceph versions,
examples, and documentation.

Supported versions now include only Reef and Squid.

Signed-off-by: Travis Nielsen <[email protected]>
travisn committed Oct 3, 2024
1 parent fc2ac66 commit b665d7a
Showing 67 changed files with 236 additions and 531 deletions.
122 changes: 1 addition & 121 deletions .github/workflows/daily-nightly-jobs.yml
@@ -110,46 +110,6 @@ jobs:
if: always()
run: sudo rm -rf /usr/bin/yq

smoke-suite-quincy-devel:
if: github.repository == 'rook/rook'
runs-on: ubuntu-22.04
steps:
- name: checkout
uses: actions/checkout@d632683dd7b4114ad314bca15554477dd762a938 # v4.2.0
with:
fetch-depth: 0

- name: consider debugging
uses: ./.github/workflows/tmate_debug
with:
use-tmate: ${{ secrets.USE_TMATE }}

- name: setup cluster resources
uses: ./.github/workflows/integration-test-config-latest-k8s
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
kubernetes-version: "1.28.4"

- name: TestCephSmokeSuite
run: |
export DEVICE_FILTER=$(tests/scripts/github-action-helper.sh find_extra_block_dev)
SKIP_CLEANUP_POLICY=false CEPH_SUITE_VERSION="quincy-devel" go test -v -timeout 1800s -run TestCephSmokeSuite github.com/rook/rook/tests/integration
- name: collect common logs
if: always()
run: |
export LOG_DIR="/home/runner/work/rook/rook/tests/integration/_output/tests/"
export CLUSTER_NAMESPACE="smoke-ns"
export OPERATOR_NAMESPACE="smoke-ns-system"
tests/scripts/collect-logs.sh
- name: Artifact
uses: actions/upload-artifact@50769540e7f4bd5e21e526ee35c689e35e0d6874 # v4.4.0
if: failure()
with:
name: ceph-smoke-suite-quincy-artifact
path: /home/runner/work/rook/rook/tests/integration/_output/tests/

smoke-suite-reef-devel:
if: github.repository == 'rook/rook'
runs-on: ubuntu-22.04
@@ -270,46 +230,6 @@ jobs:
name: ceph-smoke-suite-master-artifact
path: /home/runner/work/rook/rook/tests/integration/_output/tests/

object-suite-quincy-devel:
if: github.repository == 'rook/rook'
runs-on: ubuntu-22.04
steps:
- name: checkout
uses: actions/checkout@d632683dd7b4114ad314bca15554477dd762a938 # v4.2.0
with:
fetch-depth: 0

- name: consider debugging
uses: ./.github/workflows/tmate_debug
with:
use-tmate: ${{ secrets.USE_TMATE }}

- name: setup cluster resources
uses: ./.github/workflows/integration-test-config-latest-k8s
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
kubernetes-version: "1.28.4"

- name: TestCephObjectSuite
run: |
export DEVICE_FILTER=$(tests/scripts/github-action-helper.sh find_extra_block_dev)
SKIP_CLEANUP_POLICY=false CEPH_SUITE_VERSION="quincy-devel" go test -v -timeout 1800s -failfast -run TestCephObjectSuite github.com/rook/rook/tests/integration
- name: collect common logs
if: always()
run: |
export LOG_DIR="/home/runner/work/rook/rook/tests/integration/_output/tests/"
export CLUSTER_NAMESPACE="object-ns"
export OPERATOR_NAMESPACE="object-ns-system"
tests/scripts/collect-logs.sh
- name: Artifact
uses: actions/upload-artifact@50769540e7f4bd5e21e526ee35c689e35e0d6874 # v4.4.0
if: failure()
with:
name: ceph-object-suite-quincy-artifact
path: /home/runner/work/rook/rook/tests/integration/_output/tests/

object-suite-ceph-main:
if: github.repository == 'rook/rook'
runs-on: ubuntu-22.04
@@ -431,49 +351,9 @@ jobs:
name: ceph-upgrade-suite-reef-artifact
path: /home/runner/work/rook/rook/tests/integration/_output/tests/

upgrade-from-quincy-stable-to-quincy-devel:
if: github.repository == 'rook/rook'
runs-on: ubuntu-22.04
steps:
- name: checkout
uses: actions/checkout@d632683dd7b4114ad314bca15554477dd762a938 # v4.2.0
with:
fetch-depth: 0

- name: consider debugging
uses: ./.github/workflows/tmate_debug
with:
use-tmate: ${{ secrets.USE_TMATE }}

- name: setup cluster resources
uses: ./.github/workflows/integration-test-config-latest-k8s
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
kubernetes-version: "1.28.4"

- name: TestCephUpgradeSuite
run: |
export DEVICE_FILTER=$(tests/scripts/github-action-helper.sh find_extra_block_dev)
go test -v -timeout 1800s -failfast -run TestCephUpgradeSuite/TestUpgradeCephToQuincyDevel github.com/rook/rook/tests/integration
- name: collect common logs
if: always()
run: |
export LOG_DIR="/home/runner/work/rook/rook/tests/integration/_output/tests/"
export CLUSTER_NAMESPACE="upgrade"
export OPERATOR_NAMESPACE="upgrade-system"
tests/scripts/collect-logs.sh
- name: Artifact
uses: actions/upload-artifact@50769540e7f4bd5e21e526ee35c689e35e0d6874 # v4.4.0
if: failure()
with:
name: ceph-upgrade-suite-quincy-artifact
path: /home/runner/work/rook/rook/tests/integration/_output/tests/

canary-tests:
if: github.repository == 'rook/rook'
uses: ./.github/workflows/canary-integration-test.yml
with:
ceph_images: '["quay.io/ceph/ceph:v18", "quay.io/ceph/daemon-base:latest-main-devel", "quay.io/ceph/daemon-base:latest-quincy-devel", "quay.io/ceph/daemon-base:latest-reef-devel", "quay.io/ceph/daemon-base:latest-squid-devel"]'
ceph_images: '["quay.io/ceph/ceph:v18", "quay.io/ceph/daemon-base:latest-main-devel", "quay.io/ceph/daemon-base:latest-reef-devel", "quay.io/ceph/daemon-base:latest-squid-devel"]'
secrets: inherit
14 changes: 7 additions & 7 deletions Documentation/CRDs/Cluster/ceph-cluster-crd.md
@@ -29,9 +29,9 @@ Settings can be specified at the global level to apply to the cluster as a whole
* `image`: The image used for running the ceph daemons. For example, `quay.io/ceph/ceph:v18.2.4`. For more details read the [container images section](#ceph-container-images).
For the latest ceph images, see the [Ceph DockerHub](https://hub.docker.com/r/ceph/ceph/tags/).
To ensure a consistent version of the image is running across all nodes in the cluster, it is recommended to use a very specific image version.
Tags also exist that would give the latest version, but they are only recommended for test environments. For example, the tag `v17` will be updated each time a new Quincy build is released.
Using the `v17` tag is not recommended in production because it may lead to inconsistent versions of the image running across different nodes in the cluster.
* `allowUnsupported`: If `true`, allow an unsupported major version of the Ceph release. Currently `quincy` and `reef` are supported. Future versions such as `squid` (v19) would require this to be set to `true`. Should be set to `false` in production.
Tags also exist that would give the latest version, but they are only recommended for test environments. For example, the tag `v19` will be updated each time a new Squid build is released.
Using the general `v19` tag is not recommended in production because it may lead to inconsistent versions of the image running across different nodes in the cluster.
* `allowUnsupported`: If `true`, allow an unsupported major version of the Ceph release. Currently Reef and Squid are supported. Future versions such as Tentacle (v20) would require this to be set to `true`. Should be set to `false` in production.
* `imagePullPolicy`: The image pull policy for the ceph daemon pods. Possible values are `Always`, `IfNotPresent`, and `Never`. The default is `IfNotPresent`.
* `dataDirHostPath`: The path on the host ([hostPath](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath)) where config and data should be stored for each of the services. If the directory does not exist, it will be created. Because this directory persists on the host, it will remain after pods are deleted. Following paths and any of their subpaths **must not be used**: `/etc/ceph`, `/rook` or `/var/log/ceph`.
* **WARNING**: For test scenarios, if you delete a cluster and start a new cluster on the same hosts, the path used by `dataDirHostPath` must be deleted. Otherwise, stale keys and other config will remain from the previous cluster and the new mons will fail to start.
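The settings above can be sketched in a minimal `CephCluster` manifest (the name, namespace, and host path here are illustrative):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    # Pin a specific release rather than a floating tag such as v19
    image: quay.io/ceph/ceph:v19.2.0
    # Only Reef and Squid are supported; keep this false in production
    allowUnsupported: false
  # Must not be /etc/ceph, /rook, or /var/log/ceph (or their subpaths)
  dataDirHostPath: /var/lib/rook
```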
@@ -120,10 +120,10 @@ These are general purpose Ceph container with all necessary daemons and dependen

| TAG | MEANING |
| -------------------- | --------------------------------------------------------- |
| vRELNUM | Latest release in this series (e.g., **v17** = Quincy) |
| vRELNUM.Y | Latest stable release in this stable series (e.g., v17.2) |
| vRELNUM.Y.Z | A specific release (e.g., v18.2.4) |
| vRELNUM.Y.Z-YYYYMMDD | A specific build (e.g., v18.2.4-20240724) |
| vRELNUM | Latest release in this series (e.g., **v19** = Squid) |
| vRELNUM.Y | Latest stable release in this stable series (e.g., v19.2) |
| vRELNUM.Y.Z | A specific release (e.g., v19.2.0) |
| vRELNUM.Y.Z-YYYYMMDD | A specific build (e.g., v19.2.0-20240927) |

A specific build will contain a specific release of Ceph as well as security fixes from the Operating System.

@@ -17,7 +17,7 @@ In external mode, Rook will provide the configuration for the CSI driver and oth
Create the desired types of storage in the provider Ceph cluster:

* [RBD pools](https://docs.ceph.com/en/latest/rados/operations/pools/#create-a-pool)
* [CephFS filesystem](https://docs.ceph.com/en/quincy/cephfs/createfs/)
* [CephFS filesystem](https://docs.ceph.com/en/latest/cephfs/createfs/)
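As a sketch of the provider-side setup, the storage types above can be created from the provider cluster's Ceph CLI (pool and filesystem names are illustrative):

```console
# Create and initialize an RBD pool on the provider cluster
ceph osd pool create rbd-pool 32
rbd pool init rbd-pool

# Create a CephFS filesystem (mgr creates the data/metadata pools)
ceph fs volume create myfs
```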

## Connect the external Ceph Provider cluster to the Rook consumer cluster

@@ -105,7 +105,7 @@ python3 create-external-cluster-resources.py --cephfs-filesystem-name <filesyste
### RGW Multisite

Pass the `--rgw-realm-name`, `--rgw-zonegroup-name` and `--rgw-zone-name` flags to create the admin ops user in a master zone, zonegroup and realm.
See the [Multisite doc](https://docs.ceph.com/en/quincy/radosgw/multisite/#configuring-a-master-zone) for creating a zone, zonegroup and realm.
See the [Multisite doc](https://docs.ceph.com/en/latest/radosgw/multisite/#configuring-a-master-zone) for creating a zone, zonegroup and realm.

```console
python3 create-external-cluster-resources.py --rbd-data-pool-name <pool_name> --format bash --rgw-endpoint <rgw_endpoint> --rgw-realm-name <rgw_realm_name>> --rgw-zonegroup-name <rgw_zonegroup_name> --rgw-zone-name <rgw_zone_name>>
4 changes: 1 addition & 3 deletions Documentation/CRDs/Object-Storage/ceph-object-store-crd.md
@@ -148,7 +148,7 @@ The protocols section is divided into two parts:
In the `s3` section of the `protocols` section the following options can be configured:

* `authKeystone`: Whether S3 should also be authenticated using Keystone (`true`) or not (`false`). If set to `false` the default S3 auth will be used.
* `enabled`: Whether to enable S3 (`true`) or not (`false`). The default is `true` even if the section is not listed at all! Please note that S3 should not be disabled in a [Ceph Multi Site configuration](https://docs.ceph.com/en/quincy/radosgw/multisite).
* `enabled`: Whether to enable S3 (`true`) or not (`false`). The default is `true` even if the section is not listed at all! Please note that S3 should not be disabled in a [Ceph Multi Site configuration](https://docs.ceph.com/en/latest/radosgw/multisite).

#### protocols/swift settings

@@ -332,9 +332,7 @@ vault kv put rook/<mybucketkey> key=$(openssl rand -base64 32) # kv engine
vault write -f transit/keys/<mybucketkey> exportable=true # transit engine
```

* TLS authentication with custom certificates between Vault and CephObjectStore RGWs are supported from ceph v16.2.6 onwards
* `tokenSecretName` can be (and often will be) the same for both kms and s3 configurations.
* `AWS-SSE:S3` requires Ceph Quincy v17.2.3 or later.

## Deleting a CephObjectStore

12 changes: 0 additions & 12 deletions Documentation/CRDs/ceph-nfs-crd.md
@@ -194,15 +194,3 @@ the size of the cluster.
not always happen due to the Kubernetes scheduler.
* Workaround: It is safest to run only a single NFS server, but we do not limit this if it
benefits your use case.

### Ceph v17.2.1

* Ceph NFS management with the Rook mgr module enabled has a breaking regression with the Ceph
Quincy v17.2.1 release.
* Workaround: Leave Ceph's Rook orchestrator mgr module disabled. If you have enabled it, you must
disable it using the snippet below from the toolbox.

```console
ceph orch set backend ""
ceph mgr module disable rook
```
7 changes: 3 additions & 4 deletions Documentation/CRDs/specification.md
@@ -8892,7 +8892,7 @@ map[github.com/rook/rook/pkg/apis/ceph.rook.io/v1.CephNetworkType]string
networks when the &ldquo;multus&rdquo; network provider is used. This config section is not used for
other network providers.</p>
<p>Valid keys are &ldquo;public&rdquo; and &ldquo;cluster&rdquo;. Refer to Ceph networking documentation for more:
<a href="https://docs.ceph.com/en/reef/rados/configuration/network-config-ref/">https://docs.ceph.com/en/reef/rados/configuration/network-config-ref/</a></p>
<a href="https://docs.ceph.com/en/latest/rados/configuration/network-config-ref/">https://docs.ceph.com/en/latest/rados/configuration/network-config-ref/</a></p>
<p>Refer to Multus network annotation documentation for help selecting values:
<a href="https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/docs/how-to-use.md#run-pod-with-network-annotation">https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/docs/how-to-use.md#run-pod-with-network-annotation</a></p>
<p>Rook will make a best-effort attempt to automatically detect CIDR address ranges for given
@@ -9574,8 +9574,7 @@ The object store&rsquo;s advertiseEndpoint and Kubernetes service endpoint, plus
Each DNS name must be valid according to RFC-1123.
If the DNS name corresponds to an endpoint with DNS wildcard support, do not include the
wildcard itself in the list of hostnames.
E.g., use &ldquo;mystore.example.com&rdquo; instead of &ldquo;*.mystore.example.com&rdquo;.
The feature is supported only for Ceph v18 and later versions.</p>
E.g., use &ldquo;mystore.example.com&rdquo; instead of &ldquo;*.mystore.example.com&rdquo;.</p>
</td>
</tr>
</tbody>
@@ -10169,7 +10168,7 @@ string
</td>
<td>
<em>(Optional)</em>
<p>Add capabilities for user to send request to RGW Cache API header. Documented in <a href="https://docs.ceph.com/en/quincy/radosgw/rgw-cache/#cache-api">https://docs.ceph.com/en/quincy/radosgw/rgw-cache/#cache-api</a></p>
<p>Add capabilities for user to send request to RGW Cache API header. Documented in <a href="https://docs.ceph.com/en/latest/radosgw/rgw-cache/#cache-api">https://docs.ceph.com/en/latest/radosgw/rgw-cache/#cache-api</a></p>
</td>
</tr>
<tr>
5 changes: 0 additions & 5 deletions Documentation/Storage-Configuration/NFS/nfs-security.md
@@ -26,11 +26,6 @@ users stored in LDAP can be associated with NFS users and vice versa.
mapping from a number of sources including LDAP, Active Directory, and FreeIPA. Currently, only
LDAP has been tested.

!!! attention
The Ceph container image must have the `sssd-client` package installed to support SSSD. This
package is included in `quay.io/ceph/ceph` in v17.2.4 and newer. For older Ceph versions you may
build your own Ceph image which adds `RUN yum install sssd-client && yum clean all`.

#### SSSD configuration

SSSD requires a configuration file in order to configure its connection to the user ID mapping
6 changes: 1 addition & 5 deletions Documentation/Storage-Configuration/NFS/nfs.md
@@ -64,11 +64,7 @@ The Ceph CLI can be used from the Rook toolbox pod to create and manage NFS expo
ensure the necessary Ceph mgr modules are enabled, if necessary, and that the Ceph orchestrator
backend is set to Rook.

#### Enable the Ceph orchestrator if necessary

* Required for Ceph v16.2.7 and below
* Optional for Ceph v16.2.8 and above
* Must be disabled for Ceph v17.2.1 due to a [Ceph regression](../../CRDs/ceph-nfs-crd.md#ceph-v1721)
#### Enable the Ceph orchestrator (optional)

```console
ceph mgr module enable rook
@@ -79,7 +79,7 @@ spec:
8. `http` (optional) hold the spec for an HTTP endpoint. The format of the URI would be: `http[s]://<fqdn>[:<port>][/<resource>]`
+ port defaults to: 80/443 for HTTP/S accordingly
9. `disableVerifySSL` indicates whether the RGW is going to verify the SSL certificate of the HTTP server in case HTTPS is used ("false" by default)
10. `sendCloudEvents`: (optional) send the notifications with the [CloudEvents header](https://github.com/cloudevents/spec/blob/main/cloudevents/adapters/aws-s3.md). Supported for Ceph Quincy (v17) or newer ("false" by default)
10. `sendCloudEvents`: (optional) send the notifications with the [CloudEvents header](https://github.com/cloudevents/spec/blob/main/cloudevents/adapters/aws-s3.md). ("false" by default)
11. `amqp` (optional) hold the spec for an AMQP endpoint. The format of the URI would be: `amqp[s]://[<user>:<password>@]<fqdn>[:<port>][/<vhost>]`
+ port defaults to: 5672/5671 for AMQP/S accordingly
+ user/password defaults to: guest/guest
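As context for the endpoint settings above, a hypothetical `CephBucketTopic` with an HTTP endpoint might look like this (the topic name, object store name, and URI are illustrative):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBucketTopic
metadata:
  name: my-topic
  namespace: rook-ceph
spec:
  objectStoreName: my-store
  objectStoreNamespace: rook-ceph
  endpoint:
    http:
      # http[s]://<fqdn>[:<port>][/<resource>]
      uri: http://my-notification-server:8080/resource
      disableVerifySSL: false
      sendCloudEvents: false
```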
@@ -23,7 +23,7 @@ Rook can configure the Ceph Object Store for several different scenarios. See ea

Rook has the ability to either deploy an object store in Kubernetes or to connect to an external RGW service.
Most commonly, the object store will be configured in Kubernetes by Rook.
Alternatively see the [external section](#connect-to-an-external-object-store) to consume an existing Ceph cluster with [Rados Gateways](https://docs.ceph.com/en/quincy/radosgw/index.html) from Rook.
Alternatively see the [external section](#connect-to-an-external-object-store) to consume an existing Ceph cluster with [Rados Gateways](https://docs.ceph.com/en/latest/radosgw/index.html) from Rook.

### Create a Local Object Store with S3

@@ -198,7 +198,7 @@ This section contains a guide on how to configure [RGW's pool placement and stor

Object Storage API allows users to override where bucket data will be stored during bucket creation. With `<LocationConstraint>` parameter in S3 API and `X-Storage-Policy` header in SWIFT. Similarly, users can override where object data will be stored by setting `X-Amz-Storage-Class` and `X-Object-Storage-Class` during object creation.

To enable this feature, configure `poolPlacements` representing a list of possible bucket data locations.
To enable this feature, configure `poolPlacements` representing a list of possible bucket data locations.
Each `poolPlacement` must have:

* a **unique** `name` to refer to it in `<LocationConstraint>` or `X-Storage-Policy`. A placement with reserved name `default` will be used by default if no location constraint is provided.
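For example, a client could select a placement at bucket creation through the S3 API (the placement name, bucket name, and endpoint are illustrative; RGW accepts a `<zonegroup>:<placement>` form, where an empty zonegroup means the default):

```console
# Create a bucket whose data lands in the placement named "fast"
aws --endpoint-url http://rgw-endpoint s3api create-bucket \
  --bucket my-bucket \
  --create-bucket-configuration LocationConstraint=":fast"
```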
12 changes: 2 additions & 10 deletions Documentation/Upgrade/ceph-upgrade.md
@@ -24,23 +24,15 @@ until all the daemons have been updated.

## Supported Versions

Rook v1.15 supports the following Ceph versions:
Rook v1.16 supports the following Ceph versions:

* Ceph Squid v19.2.0 or newer
* Ceph Reef v18.2.0 or newer
* Ceph Quincy v17.2.0 or newer

!!! important
When an update is requested, the operator will check Ceph's status,
**if it is in `HEALTH_ERR` the operator will refuse to proceed with the upgrade.**
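Cluster health can be verified ahead of time from the toolbox pod, for example:

```console
# Confirm the cluster reports HEALTH_OK (or at worst HEALTH_WARN) before upgrading
ceph status
ceph health detail
```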

!!! warning
Ceph v17.2.2 has a blocking issue when running with Rook. Use v17.2.3 or newer when possible.

### CephNFS User Consideration

Ceph Quincy v17.2.1 has a potentially breaking regression with CephNFS. See the NFS documentation's
[known issue](../CRDs/ceph-nfs-crd.md#ceph-v1721) for more detail.

### Ceph Images

Official Ceph container images can be found on [Quay](https://quay.io/repository/ceph/ceph?tab=tags).
1 change: 1 addition & 0 deletions PendingReleaseNotes.md
@@ -2,5 +2,6 @@

## Breaking Changes

- Removed support for Ceph Quincy (v17) since it has reached end of life

## Features