add multi-master option to galera #260
base: main
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: zzzeek. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment.
/retest
Force-pushed from a78e2c6 to 274cfe0
```diff
@@ -145,7 +147,7 @@ func (r *MariaDBDatabaseReconciler) Reconcile(ctx context.Context, req ctrl.Requ
 	// here we know that Galera exists so add a finalizer to ourselves and to the db CR. Before this point there is no reason to have a finalizer on ourselves as nothing to cleanup.
 	if instance.DeletionTimestamp.IsZero() || isNewInstance { // this condition can be removed if you wish as it is always true at this point otherwise we would returned earlier.
-		if controllerutil.AddFinalizer(dbGalera, fmt.Sprintf("%s-%s", helper.GetFinalizer(), instance.Name)) {
+		if dbGalera.DeletionTimestamp.IsZero() && controllerutil.AddFinalizer(dbGalera, fmt.Sprintf("%s-%s", helper.GetFinalizer(), instance.Name)) {
```
@dciabrin can you review this line? Without it, the kuttl tests would run into a stack trace, as trying to reconcile the MariaDBDatabase after Galera was deleted would raise here (this happens more frequently due to the Watch I added in this PR). Note that with this check, we still continue on to add a finalizer to the MariaDBDatabase itself if one was not there already, but I think that would have been there anyway.
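For reference, a minimal sketch of the guarded path under discussion, written as a fragment inside Reconcile; the surrounding r.Update call and error handling here are assumptions for illustration, not the PR's exact code:

```go
finalizer := fmt.Sprintf("%s-%s", helper.GetFinalizer(), instance.Name)
// Skip Galera CRs that are already being deleted; adding a finalizer to a
// deleting CR is what raised the stack trace seen in the kuttl runs.
if dbGalera.DeletionTimestamp.IsZero() && controllerutil.AddFinalizer(dbGalera, finalizer) {
	// AddFinalizer returns true only when the finalizer was newly added,
	// so the Galera CR is persisted only when something actually changed.
	if err := r.Update(ctx, dbGalera); err != nil {
		return ctrl.Result{}, err
	}
}
```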
/retest
/test mariadb-operator-build-deploy-kuttl
iteratively adding elements from PR openstack-k8s-operators#260 to see what's making it hang
This change adds the new value enableMultiMaster to the Galera CR, defaulting to false. When set to true, the "statefulset.kubernetes.io/pod-name" key is removed from the mariadb service spec, and when set to false it is restored. This has the effect of publishing the service either under one specific pod for active/passive use, or under any pod for any request, which is multi-master use. The value of the attribute is also propagated out to the status within the MariaDBDatabase CR, which is then consumable by OpenStack services in a similar manner as the TLS settings. OpenStack services will then want to set wsrep_sync_wait accordingly for AP or MM mode, which will be proposed in followup changes.

New teardown steps added to kuttl tests galera_cluster_restart and galera_log_to_disk, as there seem to be PVCs not being cleaned up on CI, leading to hangs.

Fixes: OSPRH-7406
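To make the selector behavior described above concrete, here is a minimal sketch; applyServiceSelector and the activePod parameter are hypothetical names for illustration, not the PR's actual helper:

```go
package mariadb

import corev1 "k8s.io/api/core/v1"

// applyServiceSelector (hypothetical helper) toggles the pod-pinning key on
// the mariadb Service selector based on the enableMultiMaster setting.
func applyServiceSelector(svc *corev1.Service, activePod string, enableMultiMaster bool) {
	if svc.Spec.Selector == nil {
		svc.Spec.Selector = map[string]string{}
	}
	if enableMultiMaster {
		// Multi-master: any Galera pod may serve any request, so do not
		// pin the Service to a single pod.
		delete(svc.Spec.Selector, "statefulset.kubernetes.io/pod-name")
	} else {
		// Active/passive: publish the Service under one specific pod.
		svc.Spec.Selector["statefulset.kubernetes.io/pod-name"] = activePod
	}
}
```

In multi-master mode a consuming service would additionally tune wsrep_sync_wait (e.g. wsrep_sync_wait=1 to enforce causal reads across nodes), which the description above defers to followup changes.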
Looks good to me. @dciabrin, when you are also happy with it, we could get it in.
/lgtm