Handle support for secondary resource provided by CRD when not present at Operator start #6841

Open
rwhundley opened this issue Oct 12, 2024 · 0 comments
Labels
language/go Issue is related to a Go operator project

rwhundley commented Oct 12, 2024

Type of question

Best practices
How to implement a specific feature

Question

What did you do?

Our project needs to account for the presence or absence of various CRDs, and, depending on the settings in our Operator's primary resource, the controller needs to handle those secondary resources. Examples include:

  • OpenShift Routes - they should only be created when Routes are available on the Cluster
  • CRDs/CRs that are defined and managed by other Operators using separate GVs
  • CRDs/CRs that are defined and managed by other Operators using the same GV as our Operator's primary resource

As for what we did: for the first and second bullets, we wrote functions similar to this response to a previous issue here; for the third, because the GV is shared with our primary resource, we instead check the APIResources listed for the GV to see whether that GVK is registered (both checks are sketched just after the init below). Using those checks, we have the following in our main.go's init:

func init() {
	// Register the schemes that are always required first, so a discovery
	// failure below cannot skip them.
	utilruntime.Must(clientgoscheme.AddToScheme(scheme))
	utilruntime.Must(oidcsecurityv1.AddToScheme(scheme))
	utilruntime.Must(certmgrv1.AddToScheme(scheme))

	// Build a discovery client to ask the API server which GroupVersions
	// and resources it currently serves.
	cfg, err := config.GetConfig()
	if err != nil {
		return
	}

	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		return
	}

	// Register optional schemes only when discovery reports their APIs.
	if controllercommon.ClusterHasRouteGroupVersion(dc) {
		utilruntime.Must(routev1.AddToScheme(scheme))
	}
	if controllercommon.ClusterHasOperandRequestAPIResource(dc) {
		utilruntime.Must(operatorv1alpha1.AddODLMEnabledToScheme(scheme))
	} else {
		utilruntime.Must(operatorv1alpha1.AddToScheme(scheme))
	}
	if controllercommon.ClusterHasZenExtensionGroupVersion(dc) {
		utilruntime.Must(zenv1.AddToScheme(scheme))
	}
	//+kubebuilder:scaffold:scheme
}
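
For anyone landing here, the helper checks referenced above aren't shown in full; this is a minimal sketch of how they can be written with client-go's discovery client. The package name, GroupVersion strings, and the OperandRequest Kind are illustrative assumptions, not necessarily the exact values in our project:

package controllercommon // illustrative package name

import "k8s.io/client-go/discovery"

// ClusterHasRouteGroupVersion reports whether the cluster serves
// route.openshift.io/v1 at all (covers the first two bullets: presence
// of the GroupVersion is enough).
func ClusterHasRouteGroupVersion(dc discovery.DiscoveryInterface) bool {
	_, err := dc.ServerResourcesForGroupVersion("route.openshift.io/v1")
	return err == nil
}

// ClusterHasOperandRequestAPIResource handles the third bullet: the GV is
// shared with our primary resource, so we must look for the specific Kind
// among the GV's APIResources rather than for the GV itself.
func ClusterHasOperandRequestAPIResource(dc discovery.DiscoveryInterface) bool {
	resources, err := dc.ServerResourcesForGroupVersion("operator.ibm.com/v1alpha1")
	if err != nil {
		return false
	}
	for _, res := range resources.APIResources {
		if res.Kind == "OperandRequest" {
			return true
		}
	}
	return false
}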

In the SetupWithManager function for the controller, we have the following:

func (r *AuthenticationReconciler) SetupWithManager(mgr ctrl.Manager) error {
	builder := ctrl.NewControllerManagedBy(mgr).
		Owns(&corev1.ConfigMap{}).
		Owns(&corev1.Secret{}).
		Owns(&certmgr.Certificate{}).
		Owns(&batchv1.Job{}).
		Owns(&corev1.Service{}).
		Owns(&net.Ingress{}).
		Owns(&appsv1.Deployment{})

	// Watch optional secondary resources only when discovery reports their APIs.
	if ctrlcommon.ClusterHasOpenShiftConfigGroupVersion(&r.DiscoveryClient) {
		builder.Owns(&routev1.Route{})
	}
	if ctrlcommon.ClusterHasZenExtensionGroupVersion(&r.DiscoveryClient) {
		builder.Owns(&zenv1.ZenExtension{})
	}
	if ctrlcommon.ClusterHasOperandRequestAPIResource(&r.DiscoveryClient) {
		builder.Owns(&operatorv1alpha1.OperandRequest{})
	}

	return builder.For(&operatorv1alpha1.Authentication{}).
		Complete(r)
}
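
(Side note on the pattern above: controller-runtime's Builder methods append to the builder and return it for chaining, so the conditional builder.Owns(...) calls take effect even though their return values are discarded.)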

What did you expect to see?

Ideally, if someone ended up installing the CRDs that this Operator had skipped loading support for, the Operator could detect this and load the missing GVs into its scheme. Alternatively, if there were a way for them to always be loaded without the Operator exiting when it is unable to watch them at start, I would be happy with that as well.

What did you see instead? Under which circumstances?

So far, it seems that by adding them to the scheme in our main.go's init, the Manager ends up trying to watch them, and the failed watches cause the Operator to exit. That makes sense, since the resources it's trying to watch are not served yet, but I'd prefer a solution that doesn't require a Pod restart when the new APIs become available.

Environment

Operator type:

/language go

Kubernetes cluster type:

Vanilla and OpenShift

$ operator-sdk version

operator-sdk version: "v1.37.0", commit: "819984d4c1a51c8ff2ef6c23944554148ace0752", kubernetes version: "1.29.0", go version: "go1.21.13", GOOS: "darwin", GOARCH: "arm64"

$ go version (if language is Go)

go version go1.23.1 darwin/arm64

$ kubectl version

Client Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.1", GitCommit:"05d83eff7e17160e679898a2a5cd6019ec252c49", GitTreeState:"clean", BuildDate:"2023-06-07T16:46:24Z", GoVersion:"go1.19.9", Compiler:"gc", Platform:"darwin/arm64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"28+", GitVersion:"v1.28.13-eks-a737599", GitCommit:"9183cd02caedacf6a14583843262d53d6244fc4a", GitTreeState:"clean", BuildDate:"2024-08-26T21:27:49Z", GoVersion:"go1.22.5", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.26) and server (1.28) exceeds the supported minor version skew of +/-1

Additional context

The openshift-ci bot added the language/go label on Oct 12, 2024.