Release v1.9.3-specific changes (#1173)
* Update changelog for v1.9.3 release (#1161)

* Update CHANGELOG-1.9.0.md for 1.9.3

* Update CHANGELOG-1.9.0.md

* Update CHANGELOG-1.9.0.md

* Update v1.9.3 changelog for powerflex defect (#1162)

* Update CHANGELOG-1.9.0.md (#1169)

* Add fixed issue for CSM v1.9.3 csm-operator (#1172)

* add fixed bug

* Update CHANGELOG-1.9.0.md

---------

Co-authored-by: Akshay Saini <[email protected]>

jooseppi-luna and AkshaySainiDell authored Mar 1, 2024
1 parent 7e4b5bd commit c4b6219
55 changes: 44 additions & 11 deletions CHANGELOG/CHANGELOG-1.9.0.md
<!--toc-->
- [v1.9.3](#v193)
  - [Changelog since v1.9.2](#changelog-since-v192)
  - [Known Issues](#known-issues)
  - [Changes by Kind](#changes-by-kind)
    - [Features](#features)
    - [Bugs](#bugs)
- [v1.9.2](#v192)
  - [Changelog since v1.9.1](#changelog-since-v191)
  - [Known Issues](#known-issues-1)
  - [Changes by Kind](#changes-by-kind-1)
    - [Bugs](#bugs-1)
- [v1.9.1](#v191)
  - [Changelog since v1.9.0](#changelog-since-v190)
  - [Known Issues](#known-issues-2)
  - [Changes by Kind](#changes-by-kind-2)
    - [Deprecation](#deprecation)
    - [Features](#features-1)
    - [Bugs](#bugs-2)
- [v1.9.0](#v190)
  - [Changelog since v1.8.0](#changelog-since-v180)
  - [Known Issues](#known-issues-3)
  - [Changes by Kind](#changes-by-kind-3)
    - [Deprecation](#deprecation-1)
    - [Features](#features-2)
    - [Bugs](#bugs-3)

# v1.9.3

## Changelog since v1.9.2

## Known Issues

- The status field of a csm object as deployed by CSM Operator may, in limited cases, display an incorrect status for a deployment. As a workaround, the health of the deployment can be determined by checking the health of the pods.
- When CSM Operator creates a deployment that includes secrets (e.g., application-mobility, observability, cert-manager, velero), these secrets are not deleted on uninstall and are left behind; for example, the `karavi-topology-tls`, `otel-collector-tls`, and `cert-manager-webhook-ca` secrets will remain. This should not cause any issues on the system. All secrets present on the cluster can be listed with `kubectl get secrets -A`, and any unwanted secrets can be deleted with `kubectl delete secret -n <secret-namespace> <secret-name>`.
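
The secret-cleanup workaround above can be sketched as a small script. This is a hypothetical helper, not part of CSM: the secret names come from this changelog, but the namespaces are assumptions, and the script only prints the `kubectl delete` commands (a dry run) so nothing is removed until you have reviewed them.

```shell
#!/bin/sh
# Dry-run cleanup sketch for secrets left behind after a CSM Operator uninstall.
# Secret names are the ones listed in this changelog; the namespaces are
# assumptions: confirm both with `kubectl get secrets -A` first.
set -eu

leftover_secrets() {
  # One "namespace:secret" pair per line (illustrative values).
  printf '%s\n' \
    'karavi:karavi-topology-tls' \
    'karavi:otel-collector-tls' \
    'cert-manager:cert-manager-webhook-ca'
}

# Print, rather than execute, the delete commands so they can be
# reviewed before running.
leftover_secrets | while IFS=: read -r ns name; do
  echo "kubectl delete secret -n $ns $name"
done
```

Piping the reviewed output to `sh` would perform the actual deletion.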

## Changes by Kind

### Features

- Automatically create certificates when deploying observability with csm-operator. ([#1158](https://github.com/dell/csm/issues/1158))

### Bugs
- CSM object stays in a success state when all CSI PowerFlex pods are failing due to bad secret credentials. ([#1156](https://github.com/dell/csm/issues/1156))
- If Authorization Proxy Server is installed in an alternate namespace by CSM Operator, the deployment fails. ([#1157](https://github.com/dell/csm/issues/1157))
- CSM status is not always accurate when Observability is deployed by CSM Operator without all components enabled. ([#1159](https://github.com/dell/csm/issues/1159))
- CSI driver changes to facilitate SDC brownfield deployments. ([#1152](https://github.com/dell/csm/issues/1152))
- CSM object occasionally stays in failed state when app-mobility is successfully deployed with csm-operator. ([#1171](https://github.com/dell/csm/issues/1171))

# v1.9.2

## Changelog since v1.9.1

## Known Issues

- The status field of a csm object as deployed by CSM Operator may, in limited cases, display an incorrect status for a deployment. As a workaround, the health of the deployment can be determined by checking the health of the pods.
- The status calculation done for the csm object associated with the Authorization Proxy Server when deployed with CSM Operator assumes that the proxy server will be deployed in the "authorization" namespace. If a different namespace is used, the status will stay in the failed state, even though the deployment is healthy. As a workaround, we recommend using the "authorization" namespace for the proxy server. If this is not possible, the health of the deployment can be verified by checking the status of all the pods rather than by checking the status field.
- When CSM Operator creates a deployment that includes secrets (e.g., application-mobility, observability, cert-manager, velero), these secrets are not deleted on uninstall and are left behind; for example, the `karavi-topology-tls`, `otel-collector-tls`, and `cert-manager-webhook-ca` secrets will remain. This should not cause any issues on the system. All secrets present on the cluster can be listed with `kubectl get secrets -A`, and any unwanted secrets can be deleted with `kubectl delete secret -n <secret-namespace> <secret-name>`.
- When the PowerFlex CSI driver is deployed on a host that already has SDC installed, or on a host that does not support automatic SDC installation (non-CoreOS, non-RHEL), the SDC container is unable to detect the existing scini driver. As a result, the powerflex-node pod is stuck in the `Init:CrashLoopBackOff` state.
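
Several of the workarounds above come down to "check the pods, not the csm object status". A minimal sketch of that check follows; `check_pods` and the canned pod listing are assumptions for illustration, not CSM tooling. Against a live cluster you would feed it real output, e.g. `kubectl get pods -n <driver-namespace> --no-headers | check_pods`.

```shell
#!/bin/sh
# Sketch: judge deployment health from pod state instead of trusting the
# csm object's status field. check_pods is a hypothetical helper.
set -eu

check_pods() {
  # Reads "NAME READY STATUS ..." lines on stdin; prints every pod whose
  # STATUS is not Running/Completed and exits non-zero if any were found.
  awk '$3 != "Running" && $3 != "Completed" { print "unhealthy: " $1 " (" $3 ")"; bad = 1 }
       END { exit bad }'
}

# Canned listing showing the SDC symptom described above (illustrative names):
printf '%s\n' \
  'vxflexos-controller-6b7f9c 5/5 Running' \
  'vxflexos-node-x2kqp 0/2 Init:CrashLoopBackOff' \
  | check_pods || echo 'deployment is NOT healthy'
```

The awk filter keys on the third column, matching the default `kubectl get pods` column order of NAME, READY, STATUS.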

## Changes by Kind

- For CSM Operator released in CSM v1.9.1, the Authorization Proxy Server csm object status will always show as failed, even when the deployment succeeds. This is because the operator looks for a daemonset status, but the Authorization Proxy Server deployment does not include a daemonset. As a workaround, the module is still usable as long as all the pods are running/healthy.
- For CSM Operator released in CSM v1.9.1, an install of csi-powerscale with observability will always be marked as failed in the csm object status, even when it succeeds. This is because the operator is looking for a legacy name of isilon in the status check. As a workaround, the module is still usable as long as all the pods are running/healthy.
- For csm objects created by the CSM Operator, the CSMVersion label value is v1.8.0 when it should be v1.9.1. As a workaround, the CSM version can be verified via the operator version: the v1.4.1 operator corresponds to CSM v1.9.1.
- The status field of a csm object as deployed by CSM Operator may, in limited cases, display an incorrect status for a deployment. As a workaround, the health of the deployment can be determined by checking the health of the pods.
- When the PowerFlex CSI driver is deployed on a host that already has SDC installed, or on a host that does not support automatic SDC installation (non-CoreOS, non-RHEL), the SDC container is unable to detect the existing scini driver. As a result, the powerflex-node pod is stuck in the `Init:CrashLoopBackOff` state.
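
The version-label workaround can be expressed as a tiny lookup. The mapping entries (operator v1.4.0 to CSM v1.9.0, operator v1.4.1 to CSM v1.9.1) come from this changelog; the function itself is a hypothetical convenience, not a CSM command.

```shell
#!/bin/sh
# Map the (reliable) operator version to the CSM release, since the CSMVersion
# label is wrong in these builds. Entries are taken from this changelog.
csm_version_for_operator() {
  case "$1" in
    v1.4.0) echo 'v1.9.0' ;;
    v1.4.1) echo 'v1.9.1' ;;
    *)      echo 'unknown'; return 1 ;;
  esac
}

csm_version_for_operator v1.4.1   # prints v1.9.1
```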

## Changes by Kind

- For CSM PowerMax, automatic SRDF group creation is failing with "Unable to get Remote Port on SAN for Auto SRDF" on PowerMax 10.1 arrays. As a workaround, create the SRDF Group and add it to the storage class.
- For CSM Operator released in CSM v1.9.0, a driver install will rarely (~2% of the time) have a csm object stuck in a failed state for over an hour even though the deployment succeeds. This is due to a race condition in the status update logic.
- For csm objects created by the CSM Operator, the CSMVersion label value is v1.8.0 when it should be v1.9.0. As a workaround, the CSM version can be verified via the operator version: the v1.4.0 operator corresponds to CSM v1.9.0.
- The status field of a csm object as deployed by CSM Operator may, in limited cases, display an incorrect status for a deployment. As a workaround, the health of the deployment can be determined by checking the health of the pods.
- When CSM Operator creates a deployment that includes secrets (e.g., application-mobility, observability, cert-manager, velero), these secrets are not deleted on uninstall and are left behind; for example, the `karavi-topology-tls`, `otel-collector-tls`, and `cert-manager-webhook-ca` secrets will remain. This should not cause any issues on the system. All secrets present on the cluster can be listed with `kubectl get secrets -A`, and any unwanted secrets can be deleted with `kubectl delete secret -n <secret-namespace> <secret-name>`.
- When the PowerFlex CSI driver is deployed on a host that already has SDC installed, or on a host that does not support automatic SDC installation (non-CoreOS, non-RHEL), the SDC container is unable to detect the existing scini driver. As a result, the powerflex-node pod is stuck in the `Init:CrashLoopBackOff` state.

## Changes by Kind

