Releases: piraeusdatastore/piraeus-operator
v1.4.0
This release contains some new features relating to setting properties on the Linstor Controller itself (hence the minor version bump).
Added
- Additional environment variables and LINSTOR properties can now be set in the `LinstorController` and `LinstorSatelliteSet` CRDs (see the sketch after this list).
- Set the node name variable for Controller pods, enabling `k8s-await-election` to correctly set up the endpoint for hairpin mode.
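
A minimal sketch of what this could look like on a `LinstorController` resource. The field names `additionalEnv` and `additionalProperties`, the resource name and the example property are assumptions for illustration; consult the CRD reference of your operator version for the authoritative schema.

```yaml
# Hypothetical LinstorController excerpt: pass an extra environment variable to the
# controller container and set a LINSTOR controller property.
apiVersion: piraeus.linbit.com/v1
kind: LinstorController
metadata:
  name: piraeus-op-cs          # placeholder name
spec:
  additionalEnv:               # assumed field name
    - name: JAVA_OPTS
      value: "-Xmx1G"
  additionalProperties:        # assumed field name
    DrbdOptions/auto-quorum: suspend-io
```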
Fixed
- Update the network address of controller pods if it diverged between LINSTOR and Kubernetes. This can happen after a node restart, where a pod is recreated with the same name but a different IP address.
v1.3.1
This release mainly updates the available images.
- The newer LINSTOR components contain a number of important bug fixes, especially with regard to automatically creating tie-breaker resources.
- The new DRBD release fixes a bug regarding live migration of resources.
- The HA Controller was updated to no longer crash whenever a Kubernetes watch timed out.
Added
- New guide on host preparation.
Changed
- Default images updated:
  - `operator.satelliteSet.kernelModuleInjectionImage`: `quay.io/piraeusdatastore/drbd9-bionic:v9.0.27`
  - `operator.satelliteSet.satelliteImage`: `quay.io/piraeusdatastore/piraeus-server:v1.11.1`
  - `operator.controller.controllerImage`: `quay.io/piraeusdatastore/piraeus-server:v1.11.1`
  - `haController.image`: `quay.io/piraeusdatastore/piraeus-ha-controller:v0.1.3`
  - `pv-hostpath` chart, `chownerImage`: `quay.io/centos/centos:8`
v1.3.0
The stand-out addition in this release is the first version of our new Piraeus High Availability Controller. It is deployed by default, but will only activate if you opt in with your stateful workloads. More information is available on the optional components page.
Other than the usual image updates, we also updated the API version of our CRDs. Kubernetes deprecated the older versions and marked them for removal in 1.22+.
Added
- New component: `haController` will deploy the Piraeus High Availability Controller.
- Enable strict checking of the DRBD parameter to disable the usermode helper in container environments.
- Override the image used in "chown" jobs of the `pv-hostpath` chart by using `--set chownerImage=<my-image>`.
Changed
- Updated `operator-sdk` to v0.19.4.
- Set CSI component timeout to 1 minute to reduce the number of retries in the CSI driver.
- Default images updated:
  - `operator.controller.controllerImage`: `quay.io/piraeusdatastore/piraeus-server:v1.11.0`
  - `operator.satelliteSet.satelliteImage`: `quay.io/piraeusdatastore/piraeus-server:v1.11.0`
  - `operator.satelliteSet.kernelModuleInjectionImage`: `quay.io/piraeusdatastore/drbd9-bionic:v9.0.26`
  - `csi.pluginImage`: `quay.io/piraeusdatastore/piraeus-csi:v0.11.0`
Fixed
- Fixed Helm warnings when setting `csi.controllerAffinity`, `operator.controller.affinity` and `operator.satelliteSet.storagePools`.
v1.2.0
The newest release brings updated default images, better integration with pod security policies and, perhaps most important of all, the ability to use `helm upgrade` to update to a new Piraeus version and change settings. See the upgrade guide.
Added
- `storagePools` can now also set up devices, similar to `automaticStorageType`, but with more fine-grained control. See the updated storage guide and the sketch after this list.
- New Helm options to disable creation of the LinstorController and LinstorSatelliteSet resources: `operator.controller.enabled` and `operator.satelliteSet.enabled`.
- New Helm option to override the generated controller endpoint: `controllerEndpoint`.
- Allow overriding the default `securityContext` on a component basis:
  - `etcd.podsecuritycontext` sets the securityContext of etcd pods
  - `stork.podsecuritycontext` sets the securityContext of stork plugin and scheduler pods
  - `csi-snapshotter.podsecuritycontext` sets the securityContext of the CSI-Snapshotter pods
  - `operator.podsecuritycontext` sets the securityContext of the operator pods
- Example settings for OpenShift.
- The LINSTOR controller runs with an additional GID 1000, to ensure write access to the log directory.
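
A minimal Helm values sketch of the device setup described above. The nested keys (`lvmThinPools`, `devicePaths`, etc.) and values are assumptions for illustration; the storage guide documents the authoritative layout.

```yaml
# Hypothetical values.yaml excerpt: create an LVM thin storage pool from a raw
# block device on every satellite. Key names below are assumptions.
operator:
  satelliteSet:
    storagePools:
      lvmThinPools:
        - name: lvm-thin          # LINSTOR storage pool name
          thinVolume: thinpool    # LVM thin volume to create
          volumeGroup: ""         # let the operator pick/create the volume group
          devicePaths:
            - /dev/vdb            # device to prepare; placeholder path
```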
Changed
- Fixed a bug in `pv-hostpath` where permissions on the created directory were not applied on all nodes.
- Volumes created by `pv-hostpath` are now group-writable. This makes them easier to integrate with `fsGroup` settings.
- Default value for affinity on the LINSTOR controller and CSI controller changed. The new default is to distribute the pods across all available nodes.
- Default value for tolerations for etcd pods changed. They are now able to run on master nodes.
- Updates to LinstorController, LinstorSatelliteSet and LinstorCSIDriver are now propagated across all created resources.
- Updated default images:
  - csi sidecar containers updated (compatible with Kubernetes v1.17+)
  - LINSTOR 1.10.0
  - LINSTOR CSI 0.10.0
Deprecation
- Using `automaticStorageType` is deprecated. Use the `storagePools` values instead.
v1.1.0
With release v1.1.0 we improved existing features and completed our work to make LINSTOR itself resilient to node failure.
Breaking (PLEASE READ!)
- The LINSTOR controller image given in `operator.controller.controllerImage` has to have its entrypoint set to `k8s-await-election` v0.2.0 or newer. Learn more in the upgrade guide.
Added
- The LINSTOR controller can be started with multiple replicas. See `operator.controller.replicas` and the sketch after this list.
  NOTE: This requires support from the container. You need `piraeus-server:v1.8.0` or newer.
- The `pv-hostpath` helper chart automatically sets up permissions for non-root etcd containers.
- Disable securityContext enforcement by setting `global.setSecurityContext=false`.
- Add cluster roles to work with OpenShift's SCC system.
- Control volume placement and accessibility by using CSI's topology feature. Controlled by setting `csi.enableTopology`.
- All pods use a dedicated service account to allow for fine-grained permission control.
- The new Helm section `psp.*` can automatically configure the ServiceAccount of all components to use the appropriate PSP roles.
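
A minimal Helm values sketch for the multi-replica controller mentioned above (the replica count is just an example):

```yaml
# values.yaml excerpt: run the LINSTOR controller with two replicas.
# Requires a controller image based on piraeus-server:v1.8.0 or newer (see above).
operator:
  controller:
    replicas: 2
```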
Changed
- Default values:
  - `operator.controller.controllerImage`: `quay.io/piraeusdatastore/piraeus-server:v1.9.0`
  - `operator.satelliteSet.satelliteImage`: `quay.io/piraeusdatastore/piraeus-server:v1.9.0`
  - `operator.satelliteSet.kernelModuleInjectionImage`: `quay.io/piraeusdatastore/drbd9-bionic:v9.0.25`
  - `stork.storkImage`: `docker.io/openstorage/stork:2.5.0`
- linstor-controller no longer starts in a privileged container.
Removed
- Legacy CRDs (LinstorControllerSet, LinstorNodeSet) have been removed.
- `v1alpha` CRD versions have been removed.
- The default pull secret `drbdiocred` was removed. To keep using it, use `--set drbdRepoCred=drbdiocred`.
v1.0.0
We finally released v1.0.0, our first stable release!
While we tried to stay compatible with older releases, there may be some changes that require manual intervention. Read below and take a look at the UPGRADE guide.
Breaking changes (PLEASE READ!)
- Renamed `LinstorNodeSet` to `LinstorSatelliteSet`. This brings the operator in line with other LINSTOR resources.
  Existing `LinstorNodeSet` resources will automatically be migrated to `LinstorSatelliteSet`.
- Renamed `LinstorControllerSet` to `LinstorController`. The old name implied the existence of multiple (separate) controllers.
  Existing `LinstorControllerSet` resources will automatically be migrated to `LinstorController`.
- Helm values renamed to align with new CRD names:
  - `operator.controllerSet` to `operator.controller`
  - `operator.nodeSet` to `operator.satelliteSet`
- Renamed `kernelModImage` to `kernelModuleInjectionImage`
- Renamed `drbdKernelModuleInjectionMode` to `kernelModuleInjectionMode`
Added
- `v1` of all CRDs.
- Central value for the image pull policy of all pods. Use `--set global.imagePullPolicy=<value>` on helm deployment.
- `charts/piraeus/values.cn.yaml`: a set of helm values for faster image download for CN users.
- Allow specifying resource requirements for all pods (see the sketch after this list). In helm you can set:
  - `etcd.resources` for etcd containers
  - `stork.storkResources` for stork plugin resources
  - `stork.schedulerResources` for the kube-scheduler deployed for use with stork
  - `csi-snapshotter.resources` for the cluster snapshotter controller
  - `csi.resources` for all CSI related containers. For brevity, there is only one setting for ALL CSI containers. They are all stateless Go processes which use the same amount of resources.
  - `operator.resources` for operator containers
  - `operator.controller.resources` for LINSTOR controller containers
  - `operator.satelliteSet.resources` for LINSTOR satellite containers
  - `operator.satelliteSet.kernelModuleInjectionResources` for kernel module injector/builder containers
- Components deployed by the operator can now run with multiple replicas. Components elect a leader that takes on the actual work as long as it is active. Should one pod go down, another replica will take over.
  Currently these components support multiple replicas:
  - `etcd` => set `etcd.replicas` to the desired count
  - `stork` => set `stork.replicas` to the desired count for stork scheduler and controller
  - `snapshot-controller` => set `csi-snapshotter.replicas` to the desired count for the cluster-wide CSI snapshot controller
  - `csi-controller` => set `csi.controllerReplicas` to the desired count for the linstor CSI controller
  - `operator` => set `operator.replicas` to have multiple replicas of the operator running
- Reference docs for all helm settings.
- `stork.schedulerTag` can override the automatically chosen tag for the `kube-scheduler` image. Previously, the tag always matched the Kubernetes release.
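
A minimal Helm values sketch combining two of the additions above: resource requirements for the LINSTOR controller and a replicated etcd. The numbers are placeholders, not recommendations.

```yaml
# values.yaml excerpt: example replica count and resource requests/limits.
etcd:
  replicas: 3
operator:
  controller:
    resources:
      requests:
        cpu: 250m
        memory: 512Mi
      limits:
        memory: 1Gi
```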
Changed
- Node scheduling no longer relies on `linstor.linbit.com/piraeus-node` labels. Instead, all CRDs support setting pod affinity and tolerations (see the sketch after this list). In detail:
  - `linstorcsidrivers` gained 4 new resource keys, with no change in default behaviour:
    - `nodeAffinity`: affinity passed to the CSI nodes
    - `nodeTolerations`: tolerations passed to the CSI nodes
    - `controllerAffinity`: affinity passed to the CSI controller
    - `controllerTolerations`: tolerations passed to the CSI controller
  - `linstorcontrollerset` gained 2 new resource keys, with no change in default behaviour:
    - `affinity`: affinity passed to the linstor controller pod
    - `tolerations`: tolerations passed to the linstor controller pod
  - `linstornodeset` gained 2 new resource keys, with a change in default behaviour:
    - `affinity`: affinity passed to the linstor node pods
    - `tolerations`: tolerations passed to the linstor node pods
- The controller is now a Deployment instead of a StatefulSet.
- Renamed `kernelModImage` to `kernelModuleInjectionImage`
- Renamed `drbdKernelModuleInjectionMode` to `kernelModuleInjectionMode`
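
As a sketch of the new scheduling controls, assuming the Helm chart exposes the controller keys as `operator.controller.affinity` and `operator.controller.tolerations` (both are referenced in the v1.3.0 notes above); the values are standard Kubernetes affinity and toleration objects:

```yaml
# Hypothetical values.yaml excerpt: schedule the LINSTOR controller only on
# labelled nodes and let it tolerate the master taint.
operator:
  controller:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: piraeus/controller   # placeholder label
                  operator: Exists
    tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
```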
v1.0.0-rc1
This is the first release candidate for v1.0.0, our first stable release!
While we tried to stay compatible with older releases, there may be some changes that require manual intervention. Read below and take a look at the UPGRADE guide.
Breaking changes (PLEASE READ!)
- Renamed `LinstorNodeSet` to `LinstorSatelliteSet`. This brings the operator in line with other LINSTOR resources.
  Existing `LinstorNodeSet` resources will automatically be migrated to `LinstorSatelliteSet`.
- Renamed `LinstorControllerSet` to `LinstorController`. The old name implied the existence of multiple (separate) controllers.
  Existing `LinstorControllerSet` resources will automatically be migrated to `LinstorController`.
- Helm values renamed to align with new CRD names:
  - `operator.controllerSet` to `operator.controller`
  - `operator.nodeSet` to `operator.satelliteSet`
Added
- `v1` of all CRDs.
- Central value for the image pull policy of all pods. Use `--set global.imagePullPolicy=<value>` on helm deployment.
- `charts/piraeus/values.cn.yaml`: a set of helm values for faster image download for CN users.
- Allow specifying resource requirements for all pods. In helm you can set:
  - `etcd.resources` for etcd containers
  - `stork.storkResources` for stork plugin resources
  - `stork.schedulerResources` for the kube-scheduler deployed for use with stork
  - `csi-snapshotter.resources` for the cluster snapshotter controller
  - `csi.resources` for all CSI related containers. For brevity, there is only one setting for ALL CSI containers. They are all stateless Go processes which use the same amount of resources.
  - `operator.resources` for operator containers
  - `operator.controller.resources` for LINSTOR controller containers
  - `operator.satelliteSet.resources` for LINSTOR satellite containers
  - `operator.satelliteSet.kernelModuleInjectionResources` for kernel module injector/builder containers
- Components deployed by the operator can now run with multiple replicas. Components elect a leader that takes on the actual work as long as it is active. Should one pod go down, another replica will take over.
  Currently these components support multiple replicas:
  - `etcd` => set `etcd.replicas` to the desired count
  - `stork` => set `stork.replicas` to the desired count for stork scheduler and controller
  - `snapshot-controller` => set `csi-snapshotter.replicas` to the desired count for the cluster-wide CSI snapshot controller
  - `csi-controller` => set `csi.controllerReplicas` to the desired count for the linstor CSI controller
  - `operator` => set `operator.replicas` to have multiple replicas of the operator running
Changed
- Node scheduling no longer relies on `linstor.linbit.com/piraeus-node` labels. Instead, all CRDs support setting pod affinity and tolerations. In detail:
  - `linstorcsidrivers` gained 4 new resource keys, with no change in default behaviour:
    - `nodeAffinity`: affinity passed to the CSI nodes
    - `nodeTolerations`: tolerations passed to the CSI nodes
    - `controllerAffinity`: affinity passed to the CSI controller
    - `controllerTolerations`: tolerations passed to the CSI controller
  - `linstorcontrollerset` gained 2 new resource keys, with no change in default behaviour:
    - `affinity`: affinity passed to the linstor controller pod
    - `tolerations`: tolerations passed to the linstor controller pod
  - `linstornodeset` gained 2 new resource keys, with a change in default behaviour:
    - `affinity`: affinity passed to the linstor node pods
    - `tolerations`: tolerations passed to the linstor node pods
- The controller is now a Deployment instead of a StatefulSet.
- Renamed `kernelModImage` to `kernelModuleInjectionImage`
- Renamed `drbdKernelModuleInjectionMode` to `kernelModuleInjectionMode`
v0.5.0
Added
- Support volume resizing with newer CSI versions.
- A new Helm chart `csi-snapshotter` that deploys extra components needed for volume snapshots.
- Add new kmod injection mode `DepsOnly`. Will try to load kmods for LINSTOR layers from the host. Deprecates `None`.
- Automatic deployment of a Stork scheduler configured for LINSTOR.
Removed
Changed
- Replaced the `bitnami/etcd` dependency with a vendored custom version (see the sketch after this list).
  Some important keys for the `etcd` helm chart have changed:
  - `statefulset.replicaCount` -> `replicas`
  - `persistence.enabled` -> `persistentVolume.enabled`
  - `persistence.size` -> `persistentVolume.storage`
  - `auth.rbac` was removed: use TLS certificates instead
  - `auth.peer.useAutoTLS` was removed
  - `envVarsConfigMap` was removed
- When using etcd with TLS enabled:
  - For peer communication, peers need valid certificates for `*.<release-name>-etcd` (was `*.<release-name>-etcd-headless.<namespace>.svc.cluster.local`)
  - For client communication, servers need valid certificates for `*.<release-name>-etcd` (was `*.<release-name>-etcd.<namespace>.svc.cluster.local`)
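
A minimal values sketch using the renamed etcd keys from the list above (counts and sizes are placeholders):

```yaml
# values.yaml excerpt for the vendored etcd chart.
etcd:
  replicas: 3                 # was statefulset.replicaCount
  persistentVolume:
    enabled: true             # was persistence.enabled
    storage: 1Gi              # was persistence.size
```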
v0.4.1
[v0.4.1] - 2020-06-10
Added
- Automatic storage pool creation via `automaticStorageType` on `LinstorNodeSet` (see the sketch below). If this option is set, LINSTOR will create a storage pool based on all available devices on a node.
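
A minimal Helm values sketch of this option, assuming it is exposed under `operator.nodeSet` in the chart of that release; the value `LVMTHIN` is an assumed example, see the storage guide for the supported types.

```yaml
# Hypothetical values.yaml excerpt: have LINSTOR create a storage pool from all
# available devices on each node.
operator:
  nodeSet:
    automaticStorageType: LVMTHIN   # assumed example value
```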
Changed
- Moved storage documentation to the storage guide
- Helm: update default images
v0.4.0
[v0.4.0] - 2020-06-05
Added
- Secured database connection for LINSTOR: when using the `etcd` connector, you can specify a secret containing a CA certificate to switch from HTTP to HTTPS communication.
- Secured connection between LINSTOR components: you can specify TLS keys to secure the communication between controller and satellite.
- Secure storage with LUKS: you can specify the master passphrase used by LINSTOR when creating encrypted volumes when installing via Helm.
- Authentication with etcd using TLS client certificates.
- Secured connection between linstor-client and controller (HTTPS). More in the security guide.
- The LINSTOR controller endpoint can now be customized for all resources. If not specified, the old default values will be filled in.
Removed
- NodeSet service (`piraeus-op-ns`) was replaced by the ControllerSet service (`piraeus-op-cs`) everywhere.
Changed
- CSI storage driver setup: moved setup from Helm to the Go operator. This is mostly an internal change. These changes may be of note if you used a non-default CSI configuration:
  - The helm value `csi.image` was renamed to `csi.pluginImage`.
  - CSI deployment can be controlled by a new resource `linstorcsidrivers.piraeus.linbit.com`.
- PriorityClasses are not automatically created. When not specified, the priority class is:
- "system-node-critical", if deployed in "kube-system" namespace
- default PriorityClass in other namespaces
- RBAC rules for CSI: creation moved to the deployment step (Helm/OLM). ServiceAccounts should be specified in the CSI resource. If no ServiceAccounts are named, the implicitly created accounts from previous deployments will be used.
- Helm: update default images